Mar 12 14:10:25.268442 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 12 14:10:25.860995 master-0 kubenswrapper[4141]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 14:10:25.862165 master-0 kubenswrapper[4141]: I0312 14:10:25.861801 4141 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 14:10:25.865573 master-0 kubenswrapper[4141]: W0312 14:10:25.865531 4141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:10:25.865573 master-0 kubenswrapper[4141]: W0312 14:10:25.865552 4141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865584 4141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865590 4141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865596 4141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865601 4141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865606 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865611 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865615 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865621 4141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865639 4141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865644 4141 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865649 4141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865655 4141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865660 4141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865664 4141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865669 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865674 4141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865679 4141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865695 4141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865700 4141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:10:25.865716 master-0 kubenswrapper[4141]: W0312 14:10:25.865705 4141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865711 4141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865718 4141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865725 4141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865731 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865737 4141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865743 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865749 4141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865755 4141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865760 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865765 4141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865771 4141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865776 4141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865781 4141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865787 4141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865792 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865797 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865801 4141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865806 4141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:10:25.866439 master-0 kubenswrapper[4141]: W0312 14:10:25.865811 4141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865816 4141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865821 4141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865826 4141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865833 4141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865839 4141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865845 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865850 4141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865855 4141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865862 4141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865868 4141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865874 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865879 4141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865884 4141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865889 4141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865913 4141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865919 4141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865925 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865929 4141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865934 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:10:25.867118 master-0 kubenswrapper[4141]: W0312 14:10:25.865939 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865944 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865949 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865953 4141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865958 4141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865963 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865968 4141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865973 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865978 4141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865983 4141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865988 4141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: W0312 14:10:25.865993 4141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866684 4141 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866699 4141 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866708 4141 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866715 4141 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866723 4141 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866759 4141 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866788 4141 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866796 4141 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866802 4141 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 14:10:25.867749 master-0 kubenswrapper[4141]: I0312 14:10:25.866808 4141 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866816 4141 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866821 4141 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866828 4141 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866834 4141 flags.go:64] FLAG: --cgroup-root=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866839 4141 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866850 4141 flags.go:64] FLAG: --client-ca-file=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866856 4141 flags.go:64] FLAG: --cloud-config=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866861 4141 flags.go:64] FLAG: --cloud-provider=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866867 4141 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866875 4141 flags.go:64] FLAG: --cluster-domain=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866881 4141 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866887 4141 flags.go:64] FLAG: --config-dir=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866913 4141 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866920 4141 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866928 4141 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866934 4141 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866940 4141 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866946 4141 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866951 4141 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866957 4141 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866963 4141 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866969 4141 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866975 4141 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866982 4141 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 14:10:25.868384 master-0 kubenswrapper[4141]: I0312 14:10:25.866988 4141 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.866993 4141 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867000 4141 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867005 4141 flags.go:64] FLAG: --enable-server="true"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867011 4141 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867018 4141 flags.go:64] FLAG: --event-burst="100"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867024 4141 flags.go:64] FLAG: --event-qps="50"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867030 4141 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867035 4141 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867041 4141 flags.go:64] FLAG: --eviction-hard=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867048 4141 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867053 4141 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867061 4141 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867070 4141 flags.go:64] FLAG: --eviction-soft=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867075 4141 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867081 4141 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867086 4141 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867092 4141 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867098 4141 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867104 4141 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867109 4141 flags.go:64] FLAG: --feature-gates=""
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867116 4141 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867122 4141 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867128 4141 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867134 4141 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867174 4141 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 14:10:25.869336 master-0 kubenswrapper[4141]: I0312 14:10:25.867181 4141 flags.go:64] FLAG: --help="false"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867186 4141 flags.go:64] FLAG: --hostname-override=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867192 4141 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867198 4141 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867204 4141 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867210 4141 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867215 4141 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867221 4141 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867227 4141 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867232 4141 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867239 4141 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867245 4141 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867251 4141 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867256 4141 flags.go:64] FLAG: --kube-reserved=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867262 4141 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867268 4141 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867273 4141 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867279 4141 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867286 4141 flags.go:64] FLAG: --lock-file=""
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867295 4141 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867300 4141 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867306 4141 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867315 4141 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867321 4141 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867327 4141 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 14:10:25.870201 master-0 kubenswrapper[4141]: I0312 14:10:25.867333 4141 flags.go:64] FLAG: --logging-format="text"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867338 4141 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867344 4141 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867350 4141 flags.go:64] FLAG: --manifest-url=""
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867356 4141 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867363 4141 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867369 4141 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867376 4141 flags.go:64] FLAG: --max-pods="110"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867381 4141 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867387 4141 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867393 4141 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867399 4141 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867404 4141 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867410 4141 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867416 4141 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867429 4141 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867435 4141 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867440 4141 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867446 4141 flags.go:64] FLAG: --pod-cidr=""
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867452 4141 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867460 4141 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867466 4141 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867472 4141 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 14:10:25.871010 master-0 kubenswrapper[4141]: I0312 14:10:25.867477 4141 flags.go:64] FLAG: --port="10250"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867483 4141 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867489 4141 flags.go:64] FLAG: --provider-id=""
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867494 4141 flags.go:64] FLAG: --qos-reserved=""
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867502 4141 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867507 4141 flags.go:64] FLAG: --register-node="true"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867513 4141 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867518 4141 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867527 4141 flags.go:64] FLAG: --registry-burst="10"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867533 4141 flags.go:64] FLAG: --registry-qps="5"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867544 4141 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867550 4141 flags.go:64] FLAG: --reserved-memory=""
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867557 4141 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867562 4141 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867568 4141 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867574 4141 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867579 4141 flags.go:64] FLAG: --runonce="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867585 4141 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867590 4141 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867596 4141 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867602 4141 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867607 4141 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867614 4141 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867619 4141 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867625 4141 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867631 4141 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 14:10:25.871806 master-0 kubenswrapper[4141]: I0312 14:10:25.867636 4141 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867642 4141 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867648 4141 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867654 4141 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867660 4141 flags.go:64] FLAG: --system-cgroups=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867666 4141 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867675 4141 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867681 4141 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867686 4141 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867693 4141 flags.go:64] FLAG: --tls-min-version=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867701 4141 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867707 4141 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867712 4141 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867718 4141 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867723 4141 flags.go:64] FLAG: --v="2"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867731 4141 flags.go:64] FLAG: --version="false"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867738 4141 flags.go:64] FLAG: --vmodule=""
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867744 4141 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: I0312 14:10:25.867750 4141 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.867973 4141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.867985 4141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.867991 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.867997 4141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.868002 4141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:10:25.872665 master-0 kubenswrapper[4141]: W0312 14:10:25.868008 4141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868013 4141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868019 4141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868024 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868029 4141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868034 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868039 4141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868045 4141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868050 4141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868055 4141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868060 4141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868065 4141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868071 4141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868076 4141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868081 4141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868086 4141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868091 4141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868096 4141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868102 4141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868107 4141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:10:25.873480 master-0 kubenswrapper[4141]: W0312 14:10:25.868112 4141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868117 4141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868124 4141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868130 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868135 4141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868140 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868145 4141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868149 4141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868155 4141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868160 4141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868165 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868173 4141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868179 4141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868184 4141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868190 4141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868197 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868203 4141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868208 4141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868213 4141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:10:25.874184 master-0 kubenswrapper[4141]: W0312 14:10:25.868218 4141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868223 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868228 4141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868233 4141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868239 4141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
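[Editor's note: the deprecation warnings at the top of this log say that --system-reserved, --volume-plugin-dir, --register-with-taints, and --container-runtime-endpoint should move into the file passed via --config. A minimal sketch of the equivalent KubeletConfiguration stanza, using the values visible in the FLAG dump above; the containerRuntimeEndpoint value is an assumption (CRI-O's conventional socket path), since the flag's value does not appear in this chunk:]

```yaml
# Sketch only: equivalent config-file form of the deprecated flags above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:            # from FLAG: --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # from FLAG: --volume-plugin-dir
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # assumed; value not shown in this log
```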
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868245 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868250 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868255 4141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868260 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868264 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868270 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868277 4141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868282 4141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868287 4141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868292 4141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868297 4141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868302 4141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868307 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868312 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868317 4141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:10:25.874808 master-0 kubenswrapper[4141]: W0312 14:10:25.868322 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868328 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868332 4141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868337 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868346 4141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868352 4141 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868357 4141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: W0312 14:10:25.868362 4141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:10:25.875537 master-0 kubenswrapper[4141]: I0312 14:10:25.869162 4141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:10:25.879349 master-0 kubenswrapper[4141]: I0312 14:10:25.879271 4141 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 14:10:25.879349 master-0 kubenswrapper[4141]: I0312 14:10:25.879337 4141 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879431 4141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879441 4141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879446 4141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879450 4141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879456 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879460 4141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879465 4141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879470 4141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879474 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879479 4141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879484 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879489 4141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879496 4141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
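[Editor's note: the `feature_gate.go:386] feature gates: {map[...]}` entry above is the kubelet's one authoritative summary of the gates it actually applied; the surrounding warnings are only the OpenShift-specific names it did not recognize. A small sketch of turning that map line into a Python dict, assuming the `Name:true`/`Name:false` space-separated format shown in this log:]

```python
import re


def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: {map[...]}' log entry into a dict.

    Returns an empty dict when the line does not contain a gate map.
    """
    m = re.search(r"feature gates: \{map\[(.*)\]\}", line)
    if not m:
        return {}
    # Each pair looks like "GateName:true" or "GateName:false".
    return {name: value == "true"
            for name, value in (pair.split(":", 1)
                                for pair in m.group(1).split())}
```

Fed the full map entry from this log, the result would show, for example, `KMSv1` enabled and `NodeSwap` disabled.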
Mar 12 14:10:25.879488 master-0 kubenswrapper[4141]: W0312 14:10:25.879504 4141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879511 4141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879517 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879522 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879526 4141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879530 4141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879533 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879537 4141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879541 4141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879544 4141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879548 4141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879552 4141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879556 4141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879561 4141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879566 4141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879572 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879576 4141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879582 4141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879587 4141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879591 4141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:10:25.879889 master-0 kubenswrapper[4141]: W0312 14:10:25.879596 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879602 4141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879607 4141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879612 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879618 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879622 4141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879628 4141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879632 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879637 4141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879641 4141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879646 4141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879650 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879655 4141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879660 4141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879666 4141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879671 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879675 4141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879680 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879684 4141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:10:25.880614 master-0 kubenswrapper[4141]: W0312 14:10:25.879689 4141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879695 4141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879700 4141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879707 4141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879711 4141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879716 4141 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879720 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879726 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879732 4141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879738 4141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879743 4141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879747 4141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879752 4141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879757 4141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879762 4141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879767 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879772 4141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879776 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879781 4141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:10:25.881553 master-0 kubenswrapper[4141]: W0312 14:10:25.879785 4141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: I0312 14:10:25.879793 4141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.879972 4141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.879985 4141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.879991 4141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.879996 4141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880001 4141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880005 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880016 4141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880022 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880027 4141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880032 4141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880037 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880041 4141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880046 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:10:25.882233 master-0 kubenswrapper[4141]: W0312 14:10:25.880051 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880055 4141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880060 4141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880064 4141 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880070 4141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880076 4141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880081 4141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880086 4141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880091 4141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880096 4141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880101 4141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880107 4141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880112 4141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880116 4141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880121 4141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880126 4141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880130 4141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880134 4141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880141 4141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:10:25.882681 master-0 kubenswrapper[4141]: W0312 14:10:25.880146 4141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880151 4141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880157 4141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880162 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880167 4141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880172 4141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880178 4141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880185 4141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880191 4141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880197 4141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880202 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880207 4141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880213 4141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880218 4141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880224 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880229 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880233 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880239 4141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880244 4141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:10:25.883308 master-0 kubenswrapper[4141]: W0312 14:10:25.880249 4141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880254 4141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880258 4141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880263 4141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880268 4141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880274 4141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880278 4141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880283 4141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880289 4141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880293 4141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880298 4141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880302 4141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880307 4141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880312 4141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880316 4141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880321 4141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880325 4141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880330 4141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880334 4141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880339 4141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:10:25.883863 master-0 kubenswrapper[4141]: W0312 14:10:25.880344 4141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:10:25.884556 master-0 kubenswrapper[4141]: I0312 14:10:25.880352 4141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:10:25.884556 master-0 kubenswrapper[4141]: I0312 14:10:25.881359 4141 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 14:10:25.885266 master-0 kubenswrapper[4141]: I0312 14:10:25.885235 4141 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 12 14:10:25.886262 master-0 kubenswrapper[4141]: I0312 14:10:25.886234 4141 server.go:997] "Starting client certificate rotation"
Mar 12 14:10:25.886262 master-0 kubenswrapper[4141]: I0312 14:10:25.886257 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 14:10:25.886514 master-0 kubenswrapper[4141]: I0312 14:10:25.886465 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 14:10:25.918850 master-0 kubenswrapper[4141]: I0312 14:10:25.918779 4141 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 14:10:25.921113 master-0 kubenswrapper[4141]: I0312 14:10:25.921066 4141 dynamic_cafile_content.go:161] "Starting controller"
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:10:25.921932 master-0 kubenswrapper[4141]: E0312 14:10:25.921867 4141 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:25.941919 master-0 kubenswrapper[4141]: I0312 14:10:25.941841 4141 log.go:25] "Validated CRI v1 runtime API" Mar 12 14:10:25.951031 master-0 kubenswrapper[4141]: I0312 14:10:25.950975 4141 log.go:25] "Validated CRI v1 image API" Mar 12 14:10:25.954588 master-0 kubenswrapper[4141]: I0312 14:10:25.954540 4141 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 14:10:25.959211 master-0 kubenswrapper[4141]: I0312 14:10:25.959168 4141 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 847b9f13-6083-4550-852f-e0336cfa76ca:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 12 14:10:25.959211 master-0 kubenswrapper[4141]: I0312 14:10:25.959201 4141 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 12 14:10:25.974080 master-0 kubenswrapper[4141]: I0312 14:10:25.973829 4141 manager.go:217] Machine: {Timestamp:2026-03-12 14:10:25.972302291 +0000 UTC m=+0.533874560 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4246b74030446349cda326caa7abc15 SystemUUID:e4246b74-0304-4634-9cda-326caa7abc15 BootID:00119185-c574-4bb3-ab0c-7bce10775874 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fa:69:5a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bb:95:55 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:6a:3f:76:c6:88:2a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified 
Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 
Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 14:10:25.974080 master-0 kubenswrapper[4141]: I0312 14:10:25.974034 4141 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 14:10:25.974327 master-0 kubenswrapper[4141]: I0312 14:10:25.974164 4141 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 14:10:25.975735 master-0 kubenswrapper[4141]: I0312 14:10:25.975699 4141 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 14:10:25.975889 master-0 kubenswrapper[4141]: I0312 14:10:25.975851 4141 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 14:10:25.976080 master-0 kubenswrapper[4141]: I0312 14:10:25.975878 4141 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 14:10:25.976158 master-0 kubenswrapper[4141]: I0312 14:10:25.976088 4141 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 14:10:25.976158 master-0 kubenswrapper[4141]: I0312 14:10:25.976097 4141 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 14:10:25.976158 master-0 kubenswrapper[4141]: I0312 14:10:25.976105 4141 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:10:25.976158 master-0 kubenswrapper[4141]: I0312 14:10:25.976125 4141 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:10:25.976797 master-0 kubenswrapper[4141]: I0312 14:10:25.976769 4141 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:10:25.976870 master-0 kubenswrapper[4141]: I0312 14:10:25.976846 4141 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 14:10:25.980560 master-0 kubenswrapper[4141]: I0312 14:10:25.980400 4141 kubelet.go:418] "Attempting to sync node with API server" Mar 12 14:10:25.980560 master-0 kubenswrapper[4141]: I0312 14:10:25.980418 4141 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 14:10:25.980560 master-0 kubenswrapper[4141]: I0312 14:10:25.980457 4141 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 14:10:25.980560 master-0 kubenswrapper[4141]: I0312 14:10:25.980470 4141 kubelet.go:324] "Adding apiserver pod source" Mar 12 14:10:25.980560 master-0 kubenswrapper[4141]: I0312 14:10:25.980484 4141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 14:10:25.985752 master-0 kubenswrapper[4141]: I0312 14:10:25.985715 4141 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 14:10:25.986764 master-0 kubenswrapper[4141]: W0312 14:10:25.986650 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:25.986764 master-0 kubenswrapper[4141]: E0312 14:10:25.986713 4141 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:25.986764 master-0 kubenswrapper[4141]: W0312 14:10:25.986686 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:25.986764 master-0 kubenswrapper[4141]: E0312 14:10:25.986759 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:25.989302 master-0 kubenswrapper[4141]: I0312 14:10:25.989253 4141 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 14:10:25.989472 master-0 kubenswrapper[4141]: I0312 14:10:25.989447 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 14:10:25.989472 master-0 kubenswrapper[4141]: I0312 14:10:25.989471 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989480 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989488 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989495 4141 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989502 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989509 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989515 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989523 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989530 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 14:10:25.989564 master-0 kubenswrapper[4141]: I0312 14:10:25.989543 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 14:10:25.989770 master-0 kubenswrapper[4141]: I0312 14:10:25.989685 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 14:10:25.992202 master-0 kubenswrapper[4141]: I0312 14:10:25.992176 4141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 14:10:25.992632 master-0 kubenswrapper[4141]: I0312 14:10:25.992610 4141 server.go:1280] "Started kubelet" Mar 12 14:10:25.992782 master-0 kubenswrapper[4141]: I0312 14:10:25.992756 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:25.993711 master-0 kubenswrapper[4141]: I0312 14:10:25.993627 4141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 14:10:25.993776 master-0 kubenswrapper[4141]: I0312 14:10:25.993728 4141 
server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 14:10:25.993776 master-0 kubenswrapper[4141]: I0312 14:10:25.993734 4141 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 14:10:25.994038 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 12 14:10:25.994268 master-0 kubenswrapper[4141]: I0312 14:10:25.994147 4141 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 14:10:25.996326 master-0 kubenswrapper[4141]: I0312 14:10:25.996310 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 14:10:25.996405 master-0 kubenswrapper[4141]: I0312 14:10:25.996395 4141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 14:10:25.996569 master-0 kubenswrapper[4141]: I0312 14:10:25.996480 4141 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 14:10:25.996569 master-0 kubenswrapper[4141]: I0312 14:10:25.996500 4141 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 14:10:25.996569 master-0 kubenswrapper[4141]: E0312 14:10:25.996536 4141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 14:10:25.996716 master-0 kubenswrapper[4141]: I0312 14:10:25.996578 4141 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 14:10:25.996964 master-0 kubenswrapper[4141]: E0312 14:10:25.996924 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 12 14:10:25.997114 master-0 kubenswrapper[4141]: I0312 14:10:25.997064 4141 server.go:449] "Adding debug handlers to kubelet server" Mar 12 14:10:25.997285 master-0 kubenswrapper[4141]: W0312 14:10:25.997181 4141 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:25.997285 master-0 kubenswrapper[4141]: E0312 14:10:25.997247 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:25.997374 master-0 kubenswrapper[4141]: I0312 14:10:25.997294 4141 reconstruct.go:97] "Volume reconstruction finished" Mar 12 14:10:25.997374 master-0 kubenswrapper[4141]: I0312 14:10:25.997306 4141 reconciler.go:26] "Reconciler: start to sync state" Mar 12 14:10:25.997843 master-0 kubenswrapper[4141]: E0312 14:10:25.996915 4141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c1d5a13bda241 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:25.992589889 +0000 UTC m=+0.554162138,LastTimestamp:2026-03-12 14:10:25.992589889 +0000 UTC m=+0.554162138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:26.003177 master-0 kubenswrapper[4141]: I0312 14:10:26.003147 4141 factory.go:55] Registering systemd factory Mar 12 14:10:26.003177 master-0 
kubenswrapper[4141]: I0312 14:10:26.003182 4141 factory.go:221] Registration of the systemd container factory successfully Mar 12 14:10:26.003477 master-0 kubenswrapper[4141]: I0312 14:10:26.003453 4141 factory.go:153] Registering CRI-O factory Mar 12 14:10:26.003554 master-0 kubenswrapper[4141]: I0312 14:10:26.003542 4141 factory.go:221] Registration of the crio container factory successfully Mar 12 14:10:26.003676 master-0 kubenswrapper[4141]: I0312 14:10:26.003666 4141 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 14:10:26.003797 master-0 kubenswrapper[4141]: I0312 14:10:26.003763 4141 factory.go:103] Registering Raw factory Mar 12 14:10:26.003834 master-0 kubenswrapper[4141]: I0312 14:10:26.003802 4141 manager.go:1196] Started watching for new ooms in manager Mar 12 14:10:26.007482 master-0 kubenswrapper[4141]: I0312 14:10:26.007230 4141 manager.go:319] Starting recovery of all containers Mar 12 14:10:26.009063 master-0 kubenswrapper[4141]: E0312 14:10:26.009033 4141 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 12 14:10:26.028100 master-0 kubenswrapper[4141]: I0312 14:10:26.027785 4141 manager.go:324] Recovery completed Mar 12 14:10:26.038037 master-0 kubenswrapper[4141]: I0312 14:10:26.037990 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:26.040076 master-0 kubenswrapper[4141]: I0312 14:10:26.040040 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:26.040163 master-0 kubenswrapper[4141]: I0312 14:10:26.040088 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:26.040163 master-0 kubenswrapper[4141]: I0312 14:10:26.040102 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:26.041193 master-0 kubenswrapper[4141]: I0312 14:10:26.041166 4141 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 14:10:26.041244 master-0 kubenswrapper[4141]: I0312 14:10:26.041201 4141 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 14:10:26.041244 master-0 kubenswrapper[4141]: I0312 14:10:26.041218 4141 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:10:26.052796 master-0 kubenswrapper[4141]: I0312 14:10:26.052766 4141 policy_none.go:49] "None policy: Start" Mar 12 14:10:26.053692 master-0 kubenswrapper[4141]: I0312 14:10:26.053635 4141 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 14:10:26.053692 master-0 kubenswrapper[4141]: I0312 14:10:26.053681 4141 state_mem.go:35] "Initializing new in-memory state store" Mar 12 14:10:26.096998 master-0 kubenswrapper[4141]: E0312 14:10:26.096962 4141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 14:10:26.152076 
master-0 kubenswrapper[4141]: I0312 14:10:26.108990 4141 manager.go:334] "Starting Device Plugin manager" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.109051 4141 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.109063 4141 server.go:79] "Starting device plugin registration server" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.109496 4141 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.109511 4141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: E0312 14:10:26.111093 4141 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.113104 4141 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.113195 4141 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.113202 4141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.128265 4141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.130243 4141 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.130283 4141 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: I0312 14:10:26.130299 4141 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: E0312 14:10:26.130335 4141 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: W0312 14:10:26.131089 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 14:10:26.152076 master-0 kubenswrapper[4141]: E0312 14:10:26.131158 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:10:26.198063 master-0 kubenswrapper[4141]: E0312 14:10:26.198000 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 12 14:10:26.210403 master-0 kubenswrapper[4141]: I0312 14:10:26.210353 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.212740 master-0 kubenswrapper[4141]: I0312 14:10:26.212714 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.212845 master-0 kubenswrapper[4141]: I0312 14:10:26.212749 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.212845 master-0 kubenswrapper[4141]: I0312 14:10:26.212757 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.212845 master-0 kubenswrapper[4141]: I0312 14:10:26.212780 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:26.213543 master-0 kubenswrapper[4141]: E0312 14:10:26.213512 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 14:10:26.230571 master-0 kubenswrapper[4141]: I0312 14:10:26.230497 4141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 12 14:10:26.230833 master-0 kubenswrapper[4141]: I0312 14:10:26.230816 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.231943 master-0 kubenswrapper[4141]: I0312 14:10:26.231890 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.232005 master-0 kubenswrapper[4141]: I0312 14:10:26.231952 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.232005 master-0 kubenswrapper[4141]: I0312 14:10:26.231961 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.232113 master-0 kubenswrapper[4141]: I0312 14:10:26.232088 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.232322 master-0 kubenswrapper[4141]: I0312 14:10:26.232294 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.232363 master-0 kubenswrapper[4141]: I0312 14:10:26.232333 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.232793 master-0 kubenswrapper[4141]: I0312 14:10:26.232778 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.232920 master-0 kubenswrapper[4141]: I0312 14:10:26.232909 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.233084 master-0 kubenswrapper[4141]: I0312 14:10:26.232993 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.233084 master-0 kubenswrapper[4141]: I0312 14:10:26.233031 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.233084 master-0 kubenswrapper[4141]: I0312 14:10:26.233047 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.233084 master-0 kubenswrapper[4141]: I0312 14:10:26.233054 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.233202 master-0 kubenswrapper[4141]: I0312 14:10:26.233096 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.233202 master-0 kubenswrapper[4141]: I0312 14:10:26.233176 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.233258 master-0 kubenswrapper[4141]: I0312 14:10:26.233207 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.233618 master-0 kubenswrapper[4141]: I0312 14:10:26.233596 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.233618 master-0 kubenswrapper[4141]: I0312 14:10:26.233621 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.233716 master-0 kubenswrapper[4141]: I0312 14:10:26.233631 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.233716 master-0 kubenswrapper[4141]: I0312 14:10:26.233666 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.233716 master-0 kubenswrapper[4141]: I0312 14:10:26.233681 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.233716 master-0 kubenswrapper[4141]: I0312 14:10:26.233689 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.233854 master-0 kubenswrapper[4141]: I0312 14:10:26.233749 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.234108 master-0 kubenswrapper[4141]: I0312 14:10:26.233863 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.234108 master-0 kubenswrapper[4141]: I0312 14:10:26.233886 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.234447 master-0 kubenswrapper[4141]: I0312 14:10:26.234418 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.234508 master-0 kubenswrapper[4141]: I0312 14:10:26.234493 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.234508 master-0 kubenswrapper[4141]: I0312 14:10:26.234505 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.234608 master-0 kubenswrapper[4141]: I0312 14:10:26.234571 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.234608 master-0 kubenswrapper[4141]: I0312 14:10:26.234591 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.234608 master-0 kubenswrapper[4141]: I0312 14:10:26.234601 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.234714 master-0 kubenswrapper[4141]: I0312 14:10:26.234670 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.234714 master-0 kubenswrapper[4141]: I0312 14:10:26.234710 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.234788 master-0 kubenswrapper[4141]: I0312 14:10:26.234726 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.235342 master-0 kubenswrapper[4141]: I0312 14:10:26.235310 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.235342 master-0 kubenswrapper[4141]: I0312 14:10:26.235328 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.235445 master-0 kubenswrapper[4141]: I0312 14:10:26.235347 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.235445 master-0 kubenswrapper[4141]: I0312 14:10:26.235358 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.235445 master-0 kubenswrapper[4141]: I0312 14:10:26.235332 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.235445 master-0 kubenswrapper[4141]: I0312 14:10:26.235420 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.235580 master-0 kubenswrapper[4141]: I0312 14:10:26.235537 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.235580 master-0 kubenswrapper[4141]: I0312 14:10:26.235559 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.236081 master-0 kubenswrapper[4141]: I0312 14:10:26.236063 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.236081 master-0 kubenswrapper[4141]: I0312 14:10:26.236086 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.236292 master-0 kubenswrapper[4141]: I0312 14:10:26.236096 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.400247 master-0 kubenswrapper[4141]: I0312 14:10:26.400108 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.400433 master-0 kubenswrapper[4141]: I0312 14:10:26.400363 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.400466 master-0 kubenswrapper[4141]: I0312 14:10:26.400446 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.400690 master-0 kubenswrapper[4141]: I0312 14:10:26.400638 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.400840 master-0 kubenswrapper[4141]: I0312 14:10:26.400776 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.400887 master-0 kubenswrapper[4141]: I0312 14:10:26.400862 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.401094 master-0 kubenswrapper[4141]: I0312 14:10:26.401058 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.401274 master-0 kubenswrapper[4141]: I0312 14:10:26.401187 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.401361 master-0 kubenswrapper[4141]: I0312 14:10:26.401329 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.401443 master-0 kubenswrapper[4141]: I0312 14:10:26.401421 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.401520 master-0 kubenswrapper[4141]: I0312 14:10:26.401499 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.401649 master-0 kubenswrapper[4141]: I0312 14:10:26.401536 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.401747 master-0 kubenswrapper[4141]: I0312 14:10:26.401723 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.401830 master-0 kubenswrapper[4141]: I0312 14:10:26.401811 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.402074 master-0 kubenswrapper[4141]: I0312 14:10:26.402045 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.402209 master-0 kubenswrapper[4141]: I0312 14:10:26.402149 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.402278 master-0 kubenswrapper[4141]: I0312 14:10:26.402254 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.414090 master-0 kubenswrapper[4141]: I0312 14:10:26.414040 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.415090 master-0 kubenswrapper[4141]: I0312 14:10:26.415061 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.415143 master-0 kubenswrapper[4141]: I0312 14:10:26.415122 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.415143 master-0 kubenswrapper[4141]: I0312 14:10:26.415132 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.415200 master-0 kubenswrapper[4141]: I0312 14:10:26.415164 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:26.416250 master-0 kubenswrapper[4141]: E0312 14:10:26.416175 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 14:10:26.502555 master-0 kubenswrapper[4141]: I0312 14:10:26.502470 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.502555 master-0 kubenswrapper[4141]: I0312 14:10:26.502401 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.502555 master-0 kubenswrapper[4141]: I0312 14:10:26.502568 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.502555 master-0 kubenswrapper[4141]: I0312 14:10:26.502584 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502600 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502618 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502658 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502682 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502702 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502699 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502735 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502821 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502887 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502937 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502943 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502975 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.502980 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.503194 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.503205 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.503233 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.503257 master-0 kubenswrapper[4141]: I0312 14:10:26.503265 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503317 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503342 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503379 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503353 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503393 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503400 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503434 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503439 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503466 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503481 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503491 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503510 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.504264 master-0 kubenswrapper[4141]: I0312 14:10:26.503529 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.567876 master-0 kubenswrapper[4141]: I0312 14:10:26.567706 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:10:26.589182 master-0 kubenswrapper[4141]: I0312 14:10:26.589128 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:10:26.595162 master-0 kubenswrapper[4141]: I0312 14:10:26.595130 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:26.599670 master-0 kubenswrapper[4141]: E0312 14:10:26.599637 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 12 14:10:26.612004 master-0 kubenswrapper[4141]: I0312 14:10:26.611940 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:10:26.616729 master-0 kubenswrapper[4141]: I0312 14:10:26.616704 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:10:26.816719 master-0 kubenswrapper[4141]: I0312 14:10:26.816650 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:26.818349 master-0 kubenswrapper[4141]: I0312 14:10:26.818277 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:26.818349 master-0 kubenswrapper[4141]: I0312 14:10:26.818333 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:26.818349 master-0 kubenswrapper[4141]: I0312 14:10:26.818348 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:26.818644 master-0 kubenswrapper[4141]: I0312 14:10:26.818408 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:26.819460 master-0 kubenswrapper[4141]: E0312 14:10:26.819377 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 14:10:26.859867 master-0 kubenswrapper[4141]: W0312 14:10:26.859747 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 14:10:26.860217 master-0 kubenswrapper[4141]: E0312 14:10:26.859871 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:10:26.994540 master-0 kubenswrapper[4141]: I0312 14:10:26.994490 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 14:10:27.152390 master-0 kubenswrapper[4141]: W0312 14:10:27.152146 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 14:10:27.152390 master-0 kubenswrapper[4141]: E0312 14:10:27.152265 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 14:10:27.401732 master-0 kubenswrapper[4141]: E0312 14:10:27.401650 4141
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 12 14:10:27.410092 master-0 kubenswrapper[4141]: W0312 14:10:27.409961 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6 WatchSource:0}: Error finding container 451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6: Status 404 returned error can't find the container with id 451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6 Mar 12 14:10:27.416238 master-0 kubenswrapper[4141]: I0312 14:10:27.416195 4141 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:10:27.502621 master-0 kubenswrapper[4141]: W0312 14:10:27.502437 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:27.502621 master-0 kubenswrapper[4141]: E0312 14:10:27.502554 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:27.517184 master-0 kubenswrapper[4141]: W0312 14:10:27.516959 4141 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628 WatchSource:0}: Error finding container b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628: Status 404 returned error can't find the container with id b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628 Mar 12 14:10:27.517919 master-0 kubenswrapper[4141]: W0312 14:10:27.517832 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90 WatchSource:0}: Error finding container 360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90: Status 404 returned error can't find the container with id 360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90 Mar 12 14:10:27.535711 master-0 kubenswrapper[4141]: W0312 14:10:27.535673 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512 WatchSource:0}: Error finding container b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512: Status 404 returned error can't find the container with id b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512 Mar 12 14:10:27.620554 master-0 kubenswrapper[4141]: I0312 14:10:27.620420 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:27.622134 master-0 kubenswrapper[4141]: I0312 14:10:27.622101 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:27.622134 master-0 kubenswrapper[4141]: I0312 14:10:27.622136 4141 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:27.622222 master-0 kubenswrapper[4141]: I0312 14:10:27.622146 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:27.622222 master-0 kubenswrapper[4141]: I0312 14:10:27.622180 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 14:10:27.623401 master-0 kubenswrapper[4141]: E0312 14:10:27.623325 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 14:10:27.634458 master-0 kubenswrapper[4141]: W0312 14:10:27.634326 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:27.634537 master-0 kubenswrapper[4141]: E0312 14:10:27.634482 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:27.660046 master-0 kubenswrapper[4141]: W0312 14:10:27.659933 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9 WatchSource:0}: Error finding container 6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9: Status 404 returned error can't find the container with id 
6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9 Mar 12 14:10:27.994487 master-0 kubenswrapper[4141]: I0312 14:10:27.994429 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:28.039125 master-0 kubenswrapper[4141]: I0312 14:10:28.039046 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:10:28.040737 master-0 kubenswrapper[4141]: E0312 14:10:28.040658 4141 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:28.136314 master-0 kubenswrapper[4141]: I0312 14:10:28.136184 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6"} Mar 12 14:10:28.137342 master-0 kubenswrapper[4141]: I0312 14:10:28.137287 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9"} Mar 12 14:10:28.138281 master-0 kubenswrapper[4141]: I0312 14:10:28.138237 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512"} Mar 12 14:10:28.139406 master-0 kubenswrapper[4141]: I0312 14:10:28.139375 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90"} Mar 12 14:10:28.140513 master-0 kubenswrapper[4141]: I0312 14:10:28.140459 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628"} Mar 12 14:10:28.871634 master-0 kubenswrapper[4141]: W0312 14:10:28.871572 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:28.871634 master-0 kubenswrapper[4141]: E0312 14:10:28.871626 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:28.994233 master-0 kubenswrapper[4141]: I0312 14:10:28.994186 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:29.002969 master-0 kubenswrapper[4141]: E0312 
14:10:29.002914 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 12 14:10:29.224349 master-0 kubenswrapper[4141]: I0312 14:10:29.224226 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:29.225145 master-0 kubenswrapper[4141]: I0312 14:10:29.225112 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:29.225206 master-0 kubenswrapper[4141]: I0312 14:10:29.225152 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:29.225206 master-0 kubenswrapper[4141]: I0312 14:10:29.225162 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:29.225206 master-0 kubenswrapper[4141]: I0312 14:10:29.225200 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 14:10:29.226433 master-0 kubenswrapper[4141]: E0312 14:10:29.226397 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 14:10:29.585747 master-0 kubenswrapper[4141]: W0312 14:10:29.585699 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:29.585974 master-0 kubenswrapper[4141]: E0312 14:10:29.585761 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:29.698750 master-0 kubenswrapper[4141]: W0312 14:10:29.698702 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:29.698957 master-0 kubenswrapper[4141]: E0312 14:10:29.698760 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:29.994579 master-0 kubenswrapper[4141]: I0312 14:10:29.994536 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:30.025590 master-0 kubenswrapper[4141]: E0312 14:10:30.025441 4141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c1d5a13bda241 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 
14:10:25.992589889 +0000 UTC m=+0.554162138,LastTimestamp:2026-03-12 14:10:25.992589889 +0000 UTC m=+0.554162138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:30.044087 master-0 kubenswrapper[4141]: W0312 14:10:30.044063 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:30.044169 master-0 kubenswrapper[4141]: E0312 14:10:30.044106 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:30.994029 master-0 kubenswrapper[4141]: I0312 14:10:30.993978 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:31.994251 master-0 kubenswrapper[4141]: I0312 14:10:31.994195 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:32.204732 master-0 kubenswrapper[4141]: E0312 14:10:32.204683 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" interval="6.4s" Mar 12 14:10:32.406754 master-0 kubenswrapper[4141]: I0312 14:10:32.406369 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:10:32.407968 master-0 kubenswrapper[4141]: E0312 14:10:32.407861 4141 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:32.427466 master-0 kubenswrapper[4141]: I0312 14:10:32.427427 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:32.429441 master-0 kubenswrapper[4141]: I0312 14:10:32.429273 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:32.429441 master-0 kubenswrapper[4141]: I0312 14:10:32.429319 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:32.429441 master-0 kubenswrapper[4141]: I0312 14:10:32.429327 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:32.429441 master-0 kubenswrapper[4141]: I0312 14:10:32.429370 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 14:10:32.430485 master-0 kubenswrapper[4141]: E0312 14:10:32.430407 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 14:10:32.994393 master-0 kubenswrapper[4141]: I0312 14:10:32.994329 4141 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:33.150427 master-0 kubenswrapper[4141]: I0312 14:10:33.150317 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c"} Mar 12 14:10:33.151256 master-0 kubenswrapper[4141]: I0312 14:10:33.151236 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8"} Mar 12 14:10:33.151329 master-0 kubenswrapper[4141]: I0312 14:10:33.151315 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:33.151982 master-0 kubenswrapper[4141]: I0312 14:10:33.151963 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:33.152018 master-0 kubenswrapper[4141]: I0312 14:10:33.151986 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:33.152018 master-0 kubenswrapper[4141]: I0312 14:10:33.151994 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:33.894779 master-0 kubenswrapper[4141]: W0312 14:10:33.894684 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:33.894779 master-0 
kubenswrapper[4141]: E0312 14:10:33.894786 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:33.994766 master-0 kubenswrapper[4141]: I0312 14:10:33.994710 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:34.152385 master-0 kubenswrapper[4141]: I0312 14:10:34.152251 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:34.153080 master-0 kubenswrapper[4141]: I0312 14:10:34.153055 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:34.153120 master-0 kubenswrapper[4141]: I0312 14:10:34.153085 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:34.153120 master-0 kubenswrapper[4141]: I0312 14:10:34.153097 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:34.212891 master-0 kubenswrapper[4141]: W0312 14:10:34.212811 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:34.212891 master-0 kubenswrapper[4141]: E0312 14:10:34.212877 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:34.993654 master-0 kubenswrapper[4141]: I0312 14:10:34.993593 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:35.158092 master-0 kubenswrapper[4141]: I0312 14:10:35.157681 4141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8" exitCode=0 Mar 12 14:10:35.158092 master-0 kubenswrapper[4141]: I0312 14:10:35.157746 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8"} Mar 12 14:10:35.158092 master-0 kubenswrapper[4141]: I0312 14:10:35.157851 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:35.158714 master-0 kubenswrapper[4141]: I0312 14:10:35.158686 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:35.158772 master-0 kubenswrapper[4141]: I0312 14:10:35.158719 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:35.158772 master-0 kubenswrapper[4141]: I0312 14:10:35.158728 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:35.393407 master-0 kubenswrapper[4141]: W0312 14:10:35.393312 4141 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:35.393407 master-0 kubenswrapper[4141]: E0312 14:10:35.393391 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:35.702990 master-0 kubenswrapper[4141]: W0312 14:10:35.702785 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:35.702990 master-0 kubenswrapper[4141]: E0312 14:10:35.702859 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 12 14:10:35.994809 master-0 kubenswrapper[4141]: I0312 14:10:35.994739 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 14:10:36.111304 master-0 kubenswrapper[4141]: E0312 14:10:36.111205 4141 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 
12 14:10:36.161643 master-0 kubenswrapper[4141]: I0312 14:10:36.161568 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694"} Mar 12 14:10:36.162176 master-0 kubenswrapper[4141]: I0312 14:10:36.161745 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:36.163086 master-0 kubenswrapper[4141]: I0312 14:10:36.162883 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:36.163349 master-0 kubenswrapper[4141]: I0312 14:10:36.163162 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:36.163349 master-0 kubenswrapper[4141]: I0312 14:10:36.163193 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:36.163428 master-0 kubenswrapper[4141]: I0312 14:10:36.163346 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39"} Mar 12 14:10:36.163428 master-0 kubenswrapper[4141]: I0312 14:10:36.163408 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:36.164320 master-0 kubenswrapper[4141]: I0312 14:10:36.164276 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:36.164366 master-0 kubenswrapper[4141]: I0312 14:10:36.164323 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:36.164366 master-0 
kubenswrapper[4141]: I0312 14:10:36.164346 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:36.165002 master-0 kubenswrapper[4141]: I0312 14:10:36.164968 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 12 14:10:36.165336 master-0 kubenswrapper[4141]: I0312 14:10:36.165308 4141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="51b053d66d8a55499522dc9c1cd9c53ef4bbb87602af0b11668e0ffa1196778a" exitCode=1
Mar 12 14:10:36.165382 master-0 kubenswrapper[4141]: I0312 14:10:36.165365 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"51b053d66d8a55499522dc9c1cd9c53ef4bbb87602af0b11668e0ffa1196778a"}
Mar 12 14:10:36.165455 master-0 kubenswrapper[4141]: I0312 14:10:36.165428 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:36.166003 master-0 kubenswrapper[4141]: I0312 14:10:36.165974 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:36.166003 master-0 kubenswrapper[4141]: I0312 14:10:36.165999 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:36.166086 master-0 kubenswrapper[4141]: I0312 14:10:36.166009 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:36.166213 master-0 kubenswrapper[4141]: I0312 14:10:36.166190 4141 scope.go:117] "RemoveContainer" containerID="51b053d66d8a55499522dc9c1cd9c53ef4bbb87602af0b11668e0ffa1196778a"
Mar 12 14:10:36.166759 master-0 kubenswrapper[4141]: I0312 14:10:36.166725 4141 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca" exitCode=1
Mar 12 14:10:36.166805 master-0 kubenswrapper[4141]: I0312 14:10:36.166774 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca"}
Mar 12 14:10:36.168308 master-0 kubenswrapper[4141]: I0312 14:10:36.168211 4141 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a" exitCode=0
Mar 12 14:10:36.168308 master-0 kubenswrapper[4141]: I0312 14:10:36.168253 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"}
Mar 12 14:10:36.168308 master-0 kubenswrapper[4141]: I0312 14:10:36.168274 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:36.169030 master-0 kubenswrapper[4141]: I0312 14:10:36.168990 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:36.169030 master-0 kubenswrapper[4141]: I0312 14:10:36.169030 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:36.169102 master-0 kubenswrapper[4141]: I0312 14:10:36.169047 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:36.171569 master-0 kubenswrapper[4141]: I0312 14:10:36.171540 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:36.172083 master-0 kubenswrapper[4141]: I0312 14:10:36.172053 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:36.172083 master-0 kubenswrapper[4141]: I0312 14:10:36.172083 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:36.172209 master-0 kubenswrapper[4141]: I0312 14:10:36.172094 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:37.175951 master-0 kubenswrapper[4141]: I0312 14:10:37.175754 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 14:10:37.177477 master-0 kubenswrapper[4141]: I0312 14:10:37.176181 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 12 14:10:37.177477 master-0 kubenswrapper[4141]: I0312 14:10:37.176511 4141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="26c199200e12c0e96f1ef9586e41a918844eecbe5904742ec180d5436e1b0a15" exitCode=1
Mar 12 14:10:37.177477 master-0 kubenswrapper[4141]: I0312 14:10:37.176574 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"26c199200e12c0e96f1ef9586e41a918844eecbe5904742ec180d5436e1b0a15"}
Mar 12 14:10:37.177477 master-0 kubenswrapper[4141]: I0312 14:10:37.176638 4141 scope.go:117] "RemoveContainer" containerID="51b053d66d8a55499522dc9c1cd9c53ef4bbb87602af0b11668e0ffa1196778a"
Mar 12 14:10:37.177477 master-0 kubenswrapper[4141]: I0312 14:10:37.176763 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:37.180380 master-0 kubenswrapper[4141]: I0312 14:10:37.180352 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:37.180380 master-0 kubenswrapper[4141]: I0312 14:10:37.180381 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:37.180480 master-0 kubenswrapper[4141]: I0312 14:10:37.180389 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:37.180844 master-0 kubenswrapper[4141]: I0312 14:10:37.180813 4141 scope.go:117] "RemoveContainer" containerID="26c199200e12c0e96f1ef9586e41a918844eecbe5904742ec180d5436e1b0a15"
Mar 12 14:10:37.181054 master-0 kubenswrapper[4141]: E0312 14:10:37.181032 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 14:10:37.183162 master-0 kubenswrapper[4141]: I0312 14:10:37.183131 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:37.183486 master-0 kubenswrapper[4141]: I0312 14:10:37.183469 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:37.183577 master-0 kubenswrapper[4141]: I0312 14:10:37.183520 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"}
Mar 12 14:10:37.183812 master-0 kubenswrapper[4141]: I0312 14:10:37.183782 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:37.183858 master-0 kubenswrapper[4141]: I0312 14:10:37.183821 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:37.183858 master-0 kubenswrapper[4141]: I0312 14:10:37.183830 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:37.184003 master-0 kubenswrapper[4141]: I0312 14:10:37.183976 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:37.184003 master-0 kubenswrapper[4141]: I0312 14:10:37.183996 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:37.184003 master-0 kubenswrapper[4141]: I0312 14:10:37.184004 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:37.819331 master-0 kubenswrapper[4141]: I0312 14:10:37.818670 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:37.998606 master-0 kubenswrapper[4141]: I0312 14:10:37.998255 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:38.187345 master-0 kubenswrapper[4141]: I0312 14:10:38.187151 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 14:10:38.609463 master-0 kubenswrapper[4141]: E0312 14:10:38.609425 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 14:10:38.834554 master-0 kubenswrapper[4141]: I0312 14:10:38.832989 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:38.837919 master-0 kubenswrapper[4141]: I0312 14:10:38.835060 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:38.837919 master-0 kubenswrapper[4141]: I0312 14:10:38.835097 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:38.837919 master-0 kubenswrapper[4141]: I0312 14:10:38.835107 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:38.837919 master-0 kubenswrapper[4141]: I0312 14:10:38.835176 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:38.876322 master-0 kubenswrapper[4141]: E0312 14:10:38.876201 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 12 14:10:38.998537 master-0 kubenswrapper[4141]: I0312 14:10:38.998487 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:39.191712 master-0 kubenswrapper[4141]: I0312 14:10:39.191670 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"24ee3eeca5a94629f5c47b0ce9433577ce076c824acff7a3bc086c327eefa56a"}
Mar 12 14:10:39.192111 master-0 kubenswrapper[4141]: I0312 14:10:39.191777 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:39.192440 master-0 kubenswrapper[4141]: I0312 14:10:39.192425 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:39.192500 master-0 kubenswrapper[4141]: I0312 14:10:39.192444 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:39.192500 master-0 kubenswrapper[4141]: I0312 14:10:39.192454 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:39.192689 master-0 kubenswrapper[4141]: I0312 14:10:39.192647 4141 scope.go:117] "RemoveContainer" containerID="fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca"
Mar 12 14:10:40.000854 master-0 kubenswrapper[4141]: I0312 14:10:40.000806 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:40.033455 master-0 kubenswrapper[4141]: E0312 14:10:40.032852 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a13bda241 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:25.992589889 +0000 UTC m=+0.554162138,LastTimestamp:2026-03-12 14:10:25.992589889 +0000 UTC m=+0.554162138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.040510 master-0 kubenswrapper[4141]: E0312 14:10:40.040398 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.050367 master-0 kubenswrapper[4141]: E0312 14:10:40.049823 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.059374 master-0 kubenswrapper[4141]: E0312 14:10:40.059217 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.066934 master-0 kubenswrapper[4141]: E0312 14:10:40.066714 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1ae355b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.112501177 +0000 UTC m=+0.674073426,LastTimestamp:2026-03-12 14:10:26.112501177 +0000 UTC m=+0.674073426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.075173 master-0 kubenswrapper[4141]: E0312 14:10:40.075032 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.212737762 +0000 UTC m=+0.774310001,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.083583 master-0 kubenswrapper[4141]: E0312 14:10:40.083424 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.212754217 +0000 UTC m=+0.774326466,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.088267 master-0 kubenswrapper[4141]: E0312 14:10:40.088137 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.212761989 +0000 UTC m=+0.774334238,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.092565 master-0 kubenswrapper[4141]: E0312 14:10:40.092472 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.231936434 +0000 UTC m=+0.793508683,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.097310 master-0 kubenswrapper[4141]: E0312 14:10:40.097214 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.23195845 +0000 UTC m=+0.793530699,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.101803 master-0 kubenswrapper[4141]: E0312 14:10:40.101630 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.231966682 +0000 UTC m=+0.793538931,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.108418 master-0 kubenswrapper[4141]: E0312 14:10:40.108321 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.23287645 +0000 UTC m=+0.794448699,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.112844 master-0 kubenswrapper[4141]: E0312 14:10:40.112749 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.232976327 +0000 UTC m=+0.794548636,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.118711 master-0 kubenswrapper[4141]: E0312 14:10:40.118548 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.233013837 +0000 UTC m=+0.794586086,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.125012 master-0 kubenswrapper[4141]: E0312 14:10:40.124842 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.233042065 +0000 UTC m=+0.794614304,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.129526 master-0 kubenswrapper[4141]: E0312 14:10:40.129440 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.233051538 +0000 UTC m=+0.794623787,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.135032 master-0 kubenswrapper[4141]: E0312 14:10:40.134954 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.23305903 +0000 UTC m=+0.794631279,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.140795 master-0 kubenswrapper[4141]: E0312 14:10:40.140706 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.233613871 +0000 UTC m=+0.795186120,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.144996 master-0 kubenswrapper[4141]: E0312 14:10:40.144882 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.233627415 +0000 UTC m=+0.795199664,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.151164 master-0 kubenswrapper[4141]: E0312 14:10:40.151038 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.233636917 +0000 UTC m=+0.795209166,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.156021 master-0 kubenswrapper[4141]: E0312 14:10:40.155858 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.233677338 +0000 UTC m=+0.795249587,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.161916 master-0 kubenswrapper[4141]: E0312 14:10:40.161771 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.233686311 +0000 UTC m=+0.795258560,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.166706 master-0 kubenswrapper[4141]: E0312 14:10:40.166557 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a1692be35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a1692be35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040110645 +0000 UTC m=+0.601682894,LastTimestamp:2026-03-12 14:10:26.233694143 +0000 UTC m=+0.795266392,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.171659 master-0 kubenswrapper[4141]: E0312 14:10:40.171537 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16923484\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16923484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040075396 +0000 UTC m=+0.601647645,LastTimestamp:2026-03-12 14:10:26.234480126 +0000 UTC m=+0.796052385,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.176920 master-0 kubenswrapper[4141]: E0312 14:10:40.176676 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c1d5a16928f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c1d5a16928f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:26.040098572 +0000 UTC m=+0.601670821,LastTimestamp:2026-03-12 14:10:26.234501772 +0000 UTC m=+0.796074031,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.183203 master-0 kubenswrapper[4141]: E0312 14:10:40.183075 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5a6895ed78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:27.416051064 +0000 UTC m=+1.977623313,LastTimestamp:2026-03-12 14:10:27.416051064 +0000 UTC m=+1.977623313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.187669 master-0 kubenswrapper[4141]: E0312 14:10:40.187535 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5a6f112d01 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:27.524791553 +0000 UTC m=+2.086363812,LastTimestamp:2026-03-12 14:10:27.524791553 +0000 UTC m=+2.086363812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:40.192150 master-0 kubenswrapper[4141]: E0312 14:10:40.191995 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5a6f302285 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:27.526820485 +0000 UTC m=+2.088392734,LastTimestamp:2026-03-12 14:10:27.526820485 +0000 UTC m=+2.088392734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.195746 master-0 kubenswrapper[4141]: I0312 14:10:40.195687 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"} Mar 12 14:10:40.195811 master-0 kubenswrapper[4141]: I0312 14:10:40.195759 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:40.196759 master-0 kubenswrapper[4141]: I0312 14:10:40.196734 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:40.196807 master-0 kubenswrapper[4141]: I0312 14:10:40.196764 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:40.196807 master-0 kubenswrapper[4141]: I0312 14:10:40.196777 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:40.196865 master-0 kubenswrapper[4141]: E0312 14:10:40.196758 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5a6fd0f021 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:27.537358881 +0000 UTC m=+2.098931130,LastTimestamp:2026-03-12 14:10:27.537358881 +0000 UTC m=+2.098931130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.198164 master-0 kubenswrapper[4141]: I0312 14:10:40.198122 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65"} Mar 12 14:10:40.198221 master-0 kubenswrapper[4141]: I0312 14:10:40.198203 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:40.198887 master-0 kubenswrapper[4141]: I0312 14:10:40.198860 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:40.198943 master-0 kubenswrapper[4141]: I0312 14:10:40.198890 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:40.198943 master-0 kubenswrapper[4141]: I0312 14:10:40.198933 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:40.202366 master-0 kubenswrapper[4141]: E0312 14:10:40.202284 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c1d5a774fdeab kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:27.663117995 +0000 UTC m=+2.224690244,LastTimestamp:2026-03-12 14:10:27.663117995 +0000 UTC m=+2.224690244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.206981 master-0 kubenswrapper[4141]: E0312 14:10:40.206833 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5b8b74e5bb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 4.769s (4.769s including waiting). 
Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.296056251 +0000 UTC m=+6.857628500,LastTimestamp:2026-03-12 14:10:32.296056251 +0000 UTC m=+6.857628500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.210674 master-0 kubenswrapper[4141]: E0312 14:10:40.210581 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5b8be06131 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 4.765s (4.765s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.303100209 +0000 UTC m=+6.864672458,LastTimestamp:2026-03-12 14:10:32.303100209 +0000 UTC m=+6.864672458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.214261 master-0 kubenswrapper[4141]: E0312 14:10:40.214186 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5ba66be283 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.748450435 +0000 UTC m=+7.310022684,LastTimestamp:2026-03-12 14:10:32.748450435 +0000 UTC m=+7.310022684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.218539 master-0 kubenswrapper[4141]: E0312 14:10:40.218433 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5ba82600b7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.777425079 +0000 UTC m=+7.338997328,LastTimestamp:2026-03-12 14:10:32.777425079 +0000 UTC m=+7.338997328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.222685 master-0 kubenswrapper[4141]: E0312 14:10:40.222579 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5ba848c447 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.779703367 +0000 UTC m=+7.341275616,LastTimestamp:2026-03-12 14:10:32.779703367 +0000 UTC m=+7.341275616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.227116 master-0 kubenswrapper[4141]: E0312 14:10:40.227039 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5ba9112d85 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:32.792837509 +0000 UTC m=+7.354409758,LastTimestamp:2026-03-12 14:10:32.792837509 +0000 UTC m=+7.354409758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.231885 master-0 kubenswrapper[4141]: E0312 14:10:40.231776 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5c2ef9a248 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.03944148 +0000 UTC m=+9.601013729,LastTimestamp:2026-03-12 14:10:35.03944148 +0000 UTC m=+9.601013729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.236324 master-0 kubenswrapper[4141]: E0312 14:10:40.236236 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c33b36680 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.702s (7.702s including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.118724736 +0000 UTC m=+9.680296985,LastTimestamp:2026-03-12 14:10:35.118724736 +0000 UTC m=+9.680296985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.241045 master-0 kubenswrapper[4141]: E0312 14:10:40.240974 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c34b6984a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.61s (7.61s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.135711306 +0000 UTC m=+9.697283555,LastTimestamp:2026-03-12 14:10:35.135711306 +0000 UTC m=+9.697283555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.245031 master-0 kubenswrapper[4141]: E0312 14:10:40.244955 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c1d5c35b9e1cb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.489s (7.489s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.152703947 +0000 UTC m=+9.714276196,LastTimestamp:2026-03-12 14:10:35.152703947 +0000 UTC m=+9.714276196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.249695 master-0 kubenswrapper[4141]: E0312 14:10:40.249622 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c363becfd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.161226493 +0000 UTC m=+9.722798742,LastTimestamp:2026-03-12 14:10:35.161226493 +0000 UTC m=+9.722798742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.253374 master-0 kubenswrapper[4141]: E0312 14:10:40.253231 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5c39de5d62 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.222203746 +0000 UTC m=+9.783775995,LastTimestamp:2026-03-12 14:10:35.222203746 +0000 UTC m=+9.783775995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.257643 master-0 kubenswrapper[4141]: E0312 14:10:40.257572 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d5c3a9e82c1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.234796225 +0000 UTC m=+9.796368474,LastTimestamp:2026-03-12 14:10:35.234796225 +0000 UTC m=+9.796368474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.261720 master-0 kubenswrapper[4141]: E0312 14:10:40.261651 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c3ec1b4bb openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.304211643 +0000 UTC m=+9.865783902,LastTimestamp:2026-03-12 14:10:35.304211643 +0000 UTC m=+9.865783902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.269070 master-0 kubenswrapper[4141]: E0312 14:10:40.266706 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c3f06ae53 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.308731987 +0000 UTC m=+9.870304236,LastTimestamp:2026-03-12 14:10:35.308731987 +0000 UTC m=+9.870304236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.276578 master-0 kubenswrapper[4141]: E0312 14:10:40.275925 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c3f6068c0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.314612416 +0000 UTC m=+9.876184665,LastTimestamp:2026-03-12 14:10:35.314612416 +0000 UTC m=+9.876184665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.284418 master-0 kubenswrapper[4141]: E0312 14:10:40.283384 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c1d5c3fb1ced1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.319946961 +0000 UTC m=+9.881519210,LastTimestamp:2026-03-12 14:10:35.319946961 +0000 UTC m=+9.881519210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.288398 master-0 kubenswrapper[4141]: E0312 14:10:40.288231 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c3fb2f5a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.320022433 +0000 UTC m=+9.881594682,LastTimestamp:2026-03-12 14:10:35.320022433 +0000 UTC m=+9.881594682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.293335 master-0 kubenswrapper[4141]: E0312 14:10:40.293235 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c3fc2841d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.321041949 +0000 UTC m=+9.882614198,LastTimestamp:2026-03-12 14:10:35.321041949 +0000 UTC m=+9.882614198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.297295 master-0 kubenswrapper[4141]: E0312 14:10:40.297200 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40052e86 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.32541095 +0000 UTC m=+9.886983199,LastTimestamp:2026-03-12 14:10:35.32541095 +0000 UTC m=+9.886983199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.301194 master-0 kubenswrapper[4141]: E0312 14:10:40.301095 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c1d5c40a2b7e7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.335735271 +0000 UTC 
m=+9.897307520,LastTimestamp:2026-03-12 14:10:35.335735271 +0000 UTC m=+9.897307520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.305716 master-0 kubenswrapper[4141]: E0312 14:10:40.305610 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40cbd9d1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.338430929 +0000 UTC m=+9.900003198,LastTimestamp:2026-03-12 14:10:35.338430929 +0000 UTC m=+9.900003198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.310137 master-0 kubenswrapper[4141]: E0312 14:10:40.310012 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c363becfd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c363becfd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.161226493 +0000 UTC m=+9.722798742,LastTimestamp:2026-03-12 14:10:36.168715181 +0000 UTC m=+10.730287430,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.315021 master-0 kubenswrapper[4141]: E0312 14:10:40.314931 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c727350fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:36.171489531 +0000 UTC m=+10.733061780,LastTimestamp:2026-03-12 14:10:36.171489531 +0000 UTC m=+10.733061780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.319158 
master-0 kubenswrapper[4141]: E0312 14:10:40.319040 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c7bd7536c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:36.3290387 +0000 UTC m=+10.890610969,LastTimestamp:2026-03-12 14:10:36.3290387 +0000 UTC m=+10.890610969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.324250 master-0 kubenswrapper[4141]: E0312 14:10:40.324137 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c40052e86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40052e86 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.32541095 +0000 UTC m=+9.886983199,LastTimestamp:2026-03-12 
14:10:36.338346775 +0000 UTC m=+10.899919024,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.327807 master-0 kubenswrapper[4141]: E0312 14:10:40.327742 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c7c6bbcf7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:36.338765047 +0000 UTC m=+10.900337296,LastTimestamp:2026-03-12 14:10:36.338765047 +0000 UTC m=+10.900337296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.331096 master-0 kubenswrapper[4141]: E0312 14:10:40.331009 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5c7c795855 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:36.339656789 +0000 UTC m=+10.901229038,LastTimestamp:2026-03-12 14:10:36.339656789 +0000 UTC m=+10.901229038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.334567 master-0 kubenswrapper[4141]: E0312 14:10:40.334490 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c40cbd9d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40cbd9d1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.338430929 +0000 UTC m=+9.900003198,LastTimestamp:2026-03-12 14:10:36.34801597 +0000 UTC m=+10.909588229,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.342005 master-0 kubenswrapper[4141]: E0312 14:10:40.341796 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5cae9e99a3 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:37.180959139 +0000 UTC m=+11.742531378,LastTimestamp:2026-03-12 14:10:37.180959139 +0000 UTC m=+11.742531378,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.346936 master-0 kubenswrapper[4141]: E0312 14:10:40.346797 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5cf0364332 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 2.96s (2.96s including waiting). 
Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:38.281417522 +0000 UTC m=+12.842989771,LastTimestamp:2026-03-12 14:10:38.281417522 +0000 UTC m=+12.842989771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.353567 master-0 kubenswrapper[4141]: E0312 14:10:40.353262 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5d1d575fd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 2.698s (2.698s including waiting). 
Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.038562262 +0000 UTC m=+13.600134511,LastTimestamp:2026-03-12 14:10:39.038562262 +0000 UTC m=+13.600134511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.358520 master-0 kubenswrapper[4141]: E0312 14:10:40.358407 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5d24b6c911 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.162255633 +0000 UTC m=+13.723827882,LastTimestamp:2026-03-12 14:10:39.162255633 +0000 UTC m=+13.723827882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.362302 master-0 kubenswrapper[4141]: E0312 14:10:40.362200 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5d2531ac57 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.170309207 +0000 UTC m=+13.731881456,LastTimestamp:2026-03-12 14:10:39.170309207 +0000 UTC m=+13.731881456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.366164 master-0 kubenswrapper[4141]: E0312 14:10:40.366060 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5d26bb1a15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.196092949 +0000 UTC m=+13.757665198,LastTimestamp:2026-03-12 14:10:39.196092949 +0000 UTC m=+13.757665198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.371196 master-0 kubenswrapper[4141]: E0312 14:10:40.371095 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5d26caace8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.197113576 +0000 UTC m=+13.758685825,LastTimestamp:2026-03-12 14:10:39.197113576 +0000 UTC m=+13.758685825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.374647 master-0 kubenswrapper[4141]: E0312 14:10:40.374565 4141 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c1d5d273dda11 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:39.204661777 +0000 UTC m=+13.766234046,LastTimestamp:2026-03-12 14:10:39.204661777 +0000 UTC m=+13.766234046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.378011 master-0 kubenswrapper[4141]: E0312 14:10:40.377941 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c1d5c3f06ae53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c3f06ae53 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.308731987 +0000 UTC m=+9.870304236,LastTimestamp:2026-03-12 14:10:39.350442678 +0000 UTC m=+13.912014927,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:40.381521 master-0 kubenswrapper[4141]: E0312 14:10:40.381435 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c1d5c3fb2f5a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c1d5c3fb2f5a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.320022433 +0000 UTC m=+9.881594682,LastTimestamp:2026-03-12 14:10:39.360506593 +0000 UTC m=+13.922078842,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:10:41.002536 master-0 kubenswrapper[4141]: I0312 14:10:41.002438 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:10:41.085654 master-0 kubenswrapper[4141]: W0312 14:10:41.085549 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 14:10:41.085654 master-0 kubenswrapper[4141]: E0312 14:10:41.085630 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 14:10:41.130793 master-0 kubenswrapper[4141]: I0312 14:10:41.130672 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 14:10:41.151235 master-0 kubenswrapper[4141]: I0312 14:10:41.151144 4141 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 14:10:41.200755 master-0 kubenswrapper[4141]: I0312 14:10:41.200659 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:41.201354 master-0 kubenswrapper[4141]: I0312 14:10:41.200683 4141 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:41.202051 master-0 kubenswrapper[4141]: I0312 14:10:41.201931 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:41.202051 master-0 kubenswrapper[4141]: I0312 14:10:41.201979 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:41.202051 master-0 kubenswrapper[4141]: I0312 14:10:41.201995 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:41.202541 master-0 kubenswrapper[4141]: I0312 14:10:41.202173 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:41.202541 master-0 kubenswrapper[4141]: I0312 14:10:41.202266 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:41.202541 master-0 kubenswrapper[4141]: I0312 14:10:41.202290 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:42.001981 master-0 kubenswrapper[4141]: I0312 14:10:42.001858 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:10:42.022756 master-0 kubenswrapper[4141]: W0312 14:10:42.022684 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 14:10:42.022756 master-0 kubenswrapper[4141]: E0312 14:10:42.022756 4141 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 14:10:42.373026 master-0 kubenswrapper[4141]: W0312 14:10:42.372887 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 14:10:42.373026 master-0 kubenswrapper[4141]: E0312 14:10:42.372972 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 14:10:42.670842 master-0 kubenswrapper[4141]: I0312 14:10:42.670687 4141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:10:42.671062 master-0 kubenswrapper[4141]: I0312 14:10:42.671026 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:42.672849 master-0 kubenswrapper[4141]: I0312 14:10:42.672800 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:42.672929 master-0 kubenswrapper[4141]: I0312 14:10:42.672871 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:42.672929 master-0 kubenswrapper[4141]: I0312 14:10:42.672888 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 
14:10:42.678556 master-0 kubenswrapper[4141]: I0312 14:10:42.678520 4141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:10:43.001801 master-0 kubenswrapper[4141]: I0312 14:10:43.001720 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 14:10:43.172319 master-0 kubenswrapper[4141]: I0312 14:10:43.172254 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:10:43.176266 master-0 kubenswrapper[4141]: I0312 14:10:43.176250 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:10:43.206360 master-0 kubenswrapper[4141]: I0312 14:10:43.206308 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:43.207244 master-0 kubenswrapper[4141]: I0312 14:10:43.207207 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:43.207300 master-0 kubenswrapper[4141]: I0312 14:10:43.207248 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:43.207300 master-0 kubenswrapper[4141]: I0312 14:10:43.207261 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:43.998718 master-0 kubenswrapper[4141]: I0312 14:10:43.998520 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Mar 12 14:10:44.208637 master-0 kubenswrapper[4141]: I0312 14:10:44.208593 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:44.209307 master-0 kubenswrapper[4141]: I0312 14:10:44.209270 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:44.209372 master-0 kubenswrapper[4141]: I0312 14:10:44.209319 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:44.209372 master-0 kubenswrapper[4141]: I0312 14:10:44.209336 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:44.280242 master-0 kubenswrapper[4141]: I0312 14:10:44.280085 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:10:44.280242 master-0 kubenswrapper[4141]: I0312 14:10:44.280225 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 14:10:44.281764 master-0 kubenswrapper[4141]: I0312 14:10:44.281608 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 14:10:44.281764 master-0 kubenswrapper[4141]: I0312 14:10:44.281656 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 14:10:44.281764 master-0 kubenswrapper[4141]: I0312 14:10:44.281666 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 14:10:44.698624 master-0 kubenswrapper[4141]: I0312 14:10:44.698555 4141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:10:44.702804 master-0 kubenswrapper[4141]: I0312 14:10:44.702769 4141 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:44.998615 master-0 kubenswrapper[4141]: I0312 14:10:44.998558 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:45.171310 master-0 kubenswrapper[4141]: W0312 14:10:45.171257 4141 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 12 14:10:45.171310 master-0 kubenswrapper[4141]: E0312 14:10:45.171327 4141 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 12 14:10:45.210844 master-0 kubenswrapper[4141]: I0312 14:10:45.210722 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:45.211579 master-0 kubenswrapper[4141]: I0312 14:10:45.211546 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:45.211579 master-0 kubenswrapper[4141]: I0312 14:10:45.211577 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:45.211649 master-0 kubenswrapper[4141]: I0312 14:10:45.211585 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:45.344106 master-0 kubenswrapper[4141]: I0312 14:10:45.344052 4141 csr.go:261] certificate signing request csr-zqf7h is approved, waiting to be issued
Mar 12 14:10:45.614865 master-0 kubenswrapper[4141]: E0312 14:10:45.614755 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 14:10:45.876812 master-0 kubenswrapper[4141]: I0312 14:10:45.876585 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:45.878034 master-0 kubenswrapper[4141]: I0312 14:10:45.877955 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:45.878034 master-0 kubenswrapper[4141]: I0312 14:10:45.878017 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:45.878034 master-0 kubenswrapper[4141]: I0312 14:10:45.878034 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:45.878326 master-0 kubenswrapper[4141]: I0312 14:10:45.878089 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:45.883832 master-0 kubenswrapper[4141]: E0312 14:10:45.883760 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 12 14:10:45.999874 master-0 kubenswrapper[4141]: I0312 14:10:45.999802 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:46.112770 master-0 kubenswrapper[4141]: E0312 14:10:46.112104 4141 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 14:10:46.213151 master-0 kubenswrapper[4141]: I0312 14:10:46.212990 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:46.214119 master-0 kubenswrapper[4141]: I0312 14:10:46.214048 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:46.214119 master-0 kubenswrapper[4141]: I0312 14:10:46.214110 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:46.214306 master-0 kubenswrapper[4141]: I0312 14:10:46.214140 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:47.002451 master-0 kubenswrapper[4141]: I0312 14:10:47.002331 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:47.999228 master-0 kubenswrapper[4141]: I0312 14:10:47.999154 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:48.660023 master-0 kubenswrapper[4141]: I0312 14:10:48.659888 4141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:48.660345 master-0 kubenswrapper[4141]: I0312 14:10:48.660089 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:48.660938 master-0 kubenswrapper[4141]: I0312 14:10:48.660880 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:48.661007 master-0 kubenswrapper[4141]: I0312 14:10:48.660943 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:48.661007 master-0 kubenswrapper[4141]: I0312 14:10:48.660958 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:48.667838 master-0 kubenswrapper[4141]: I0312 14:10:48.667799 4141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:49.000157 master-0 kubenswrapper[4141]: I0312 14:10:49.000092 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:49.090818 master-0 kubenswrapper[4141]: I0312 14:10:49.090732 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:49.096772 master-0 kubenswrapper[4141]: I0312 14:10:49.096748 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:49.219328 master-0 kubenswrapper[4141]: I0312 14:10:49.219279 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:49.220081 master-0 kubenswrapper[4141]: I0312 14:10:49.219996 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:49.220151 master-0 kubenswrapper[4141]: I0312 14:10:49.220114 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:49.220151 master-0 kubenswrapper[4141]: I0312 14:10:49.220132 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:49.223274 master-0 kubenswrapper[4141]: I0312 14:10:49.223233 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:10:49.998441 master-0 kubenswrapper[4141]: I0312 14:10:49.998352 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:50.221663 master-0 kubenswrapper[4141]: I0312 14:10:50.221615 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:50.222253 master-0 kubenswrapper[4141]: I0312 14:10:50.222226 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:50.222311 master-0 kubenswrapper[4141]: I0312 14:10:50.222261 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:50.222311 master-0 kubenswrapper[4141]: I0312 14:10:50.222271 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:50.998851 master-0 kubenswrapper[4141]: I0312 14:10:50.998773 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:51.130765 master-0 kubenswrapper[4141]: I0312 14:10:51.130707 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:51.131870 master-0 kubenswrapper[4141]: I0312 14:10:51.131721 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:51.131870 master-0 kubenswrapper[4141]: I0312 14:10:51.131826 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:51.131870 master-0 kubenswrapper[4141]: I0312 14:10:51.131843 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:51.132399 master-0 kubenswrapper[4141]: I0312 14:10:51.132363 4141 scope.go:117] "RemoveContainer" containerID="26c199200e12c0e96f1ef9586e41a918844eecbe5904742ec180d5436e1b0a15"
Mar 12 14:10:51.141233 master-0 kubenswrapper[4141]: E0312 14:10:51.140984 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c363becfd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c363becfd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.161226493 +0000 UTC m=+9.722798742,LastTimestamp:2026-03-12 14:10:51.13524328 +0000 UTC m=+25.696815529,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:51.223088 master-0 kubenswrapper[4141]: I0312 14:10:51.223034 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:51.223767 master-0 kubenswrapper[4141]: I0312 14:10:51.223734 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:51.223807 master-0 kubenswrapper[4141]: I0312 14:10:51.223768 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:51.223807 master-0 kubenswrapper[4141]: I0312 14:10:51.223780 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:51.290665 master-0 kubenswrapper[4141]: E0312 14:10:51.290498 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c40052e86\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40052e86 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.32541095 +0000 UTC m=+9.886983199,LastTimestamp:2026-03-12 14:10:51.286241822 +0000 UTC m=+25.847814071,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:51.300082 master-0 kubenswrapper[4141]: E0312 14:10:51.299989 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5c40cbd9d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5c40cbd9d1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:35.338430929 +0000 UTC m=+9.900003198,LastTimestamp:2026-03-12 14:10:51.29603409 +0000 UTC m=+25.857606339,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:52.001047 master-0 kubenswrapper[4141]: I0312 14:10:52.000945 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:52.226180 master-0 kubenswrapper[4141]: I0312 14:10:52.226127 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 14:10:52.226691 master-0 kubenswrapper[4141]: I0312 14:10:52.226642 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 14:10:52.227045 master-0 kubenswrapper[4141]: I0312 14:10:52.227006 4141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac" exitCode=1
Mar 12 14:10:52.227103 master-0 kubenswrapper[4141]: I0312 14:10:52.227066 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac"}
Mar 12 14:10:52.227145 master-0 kubenswrapper[4141]: I0312 14:10:52.227113 4141 scope.go:117] "RemoveContainer" containerID="26c199200e12c0e96f1ef9586e41a918844eecbe5904742ec180d5436e1b0a15"
Mar 12 14:10:52.227220 master-0 kubenswrapper[4141]: I0312 14:10:52.227198 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:52.227871 master-0 kubenswrapper[4141]: I0312 14:10:52.227839 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:52.227871 master-0 kubenswrapper[4141]: I0312 14:10:52.227870 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:52.227994 master-0 kubenswrapper[4141]: I0312 14:10:52.227881 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:52.228418 master-0 kubenswrapper[4141]: I0312 14:10:52.228191 4141 scope.go:117] "RemoveContainer" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac"
Mar 12 14:10:52.228418 master-0 kubenswrapper[4141]: E0312 14:10:52.228333 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 14:10:52.232848 master-0 kubenswrapper[4141]: E0312 14:10:52.232760 4141 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c1d5cae9e99a3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c1d5cae9e99a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:10:37.180959139 +0000 UTC m=+11.742531378,LastTimestamp:2026-03-12 14:10:52.228310434 +0000 UTC m=+26.789882683,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:10:52.621583 master-0 kubenswrapper[4141]: E0312 14:10:52.621530 4141 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 14:10:52.884197 master-0 kubenswrapper[4141]: I0312 14:10:52.884049 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:52.885086 master-0 kubenswrapper[4141]: I0312 14:10:52.885054 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:52.885146 master-0 kubenswrapper[4141]: I0312 14:10:52.885111 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:52.885146 master-0 kubenswrapper[4141]: I0312 14:10:52.885122 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:52.885238 master-0 kubenswrapper[4141]: I0312 14:10:52.885173 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:10:52.889404 master-0 kubenswrapper[4141]: E0312 14:10:52.889377 4141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 12 14:10:52.997614 master-0 kubenswrapper[4141]: I0312 14:10:52.997580 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:53.230502 master-0 kubenswrapper[4141]: I0312 14:10:53.230405 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 14:10:54.000730 master-0 kubenswrapper[4141]: I0312 14:10:54.000589 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:55.000172 master-0 kubenswrapper[4141]: I0312 14:10:55.000120 4141 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 14:10:55.873611 master-0 kubenswrapper[4141]: I0312 14:10:55.873549 4141 csr.go:257] certificate signing request csr-zqf7h is issued
Mar 12 14:10:55.887877 master-0 kubenswrapper[4141]: I0312 14:10:55.887810 4141 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 12 14:10:56.002001 master-0 kubenswrapper[4141]: I0312 14:10:56.001947 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.018099 master-0 kubenswrapper[4141]: I0312 14:10:56.018029 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.076186 master-0 kubenswrapper[4141]: I0312 14:10:56.076142 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.113754 master-0 kubenswrapper[4141]: E0312 14:10:56.113708 4141 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 14:10:56.336969 master-0 kubenswrapper[4141]: I0312 14:10:56.336910 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.336969 master-0 kubenswrapper[4141]: E0312 14:10:56.336960 4141 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 14:10:56.360052 master-0 kubenswrapper[4141]: I0312 14:10:56.359997 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.386035 master-0 kubenswrapper[4141]: I0312 14:10:56.385966 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.452033 master-0 kubenswrapper[4141]: I0312 14:10:56.451941 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.722771 master-0 kubenswrapper[4141]: I0312 14:10:56.722622 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.722771 master-0 kubenswrapper[4141]: E0312 14:10:56.722685 4141 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 14:10:56.820839 master-0 kubenswrapper[4141]: I0312 14:10:56.820778 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.841082 master-0 kubenswrapper[4141]: I0312 14:10:56.841020 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:56.875266 master-0 kubenswrapper[4141]: I0312 14:10:56.875188 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 09:02:52.306089434 +0000 UTC
Mar 12 14:10:56.875266 master-0 kubenswrapper[4141]: I0312 14:10:56.875258 4141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h51m55.430838978s for next certificate rotation
Mar 12 14:10:56.895746 master-0 kubenswrapper[4141]: I0312 14:10:56.895687 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:57.159980 master-0 kubenswrapper[4141]: I0312 14:10:57.159924 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:57.159980 master-0 kubenswrapper[4141]: E0312 14:10:57.159965 4141 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 14:10:57.738590 master-0 kubenswrapper[4141]: I0312 14:10:57.738539 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:57.753066 master-0 kubenswrapper[4141]: I0312 14:10:57.753020 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:57.807314 master-0 kubenswrapper[4141]: I0312 14:10:57.807261 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:58.077388 master-0 kubenswrapper[4141]: I0312 14:10:58.077279 4141 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 12 14:10:58.077388 master-0 kubenswrapper[4141]: E0312 14:10:58.077316 4141 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 12 14:10:59.889816 master-0 kubenswrapper[4141]: I0312 14:10:59.889740 4141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:10:59.891429 master-0 kubenswrapper[4141]: I0312 14:10:59.891384 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:10:59.891488 master-0 kubenswrapper[4141]: I0312 14:10:59.891444 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:10:59.891488 master-0 kubenswrapper[4141]: I0312 14:10:59.891457 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:10:59.891559 master-0 kubenswrapper[4141]: I0312 14:10:59.891522 4141 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:11:00.000666 master-0 kubenswrapper[4141]: E0312 14:11:00.000611 4141 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 12 14:11:00.137389 master-0 kubenswrapper[4141]: I0312 14:11:00.137326 4141 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 14:11:00.137389 master-0 kubenswrapper[4141]: E0312 14:11:00.137374 4141 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 12 14:11:00.747340 master-0 kubenswrapper[4141]: E0312 14:11:00.747289 4141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 14:11:00.847875 master-0 kubenswrapper[4141]: E0312 14:11:00.847811 4141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 14:11:00.887077 master-0 kubenswrapper[4141]: I0312 14:11:00.887005 4141 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 14:11:00.942963 master-0 kubenswrapper[4141]: I0312 14:11:00.942853 4141 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 12 14:11:00.998584 master-0 kubenswrapper[4141]: I0312 14:11:00.998408 4141 apiserver.go:52] "Watching apiserver"
Mar 12 14:11:01.000276 master-0 kubenswrapper[4141]: I0312 14:11:01.000231 4141 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 14:11:01.000379 master-0 kubenswrapper[4141]: I0312 14:11:01.000322 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Mar 12 14:11:01.019269 master-0 kubenswrapper[4141]: I0312 14:11:01.019192 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 12 14:11:01.097586 master-0 kubenswrapper[4141]: I0312 14:11:01.097521 4141 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 12 14:11:01.221261 master-0 kubenswrapper[4141]: I0312 14:11:01.221192 4141 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 12 14:11:01.387957 master-0 kubenswrapper[4141]: I0312 14:11:01.387788 4141 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 12 14:11:01.653683 master-0 kubenswrapper[4141]: I0312 14:11:01.653533 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"]
Mar 12 14:11:01.653885 master-0 kubenswrapper[4141]: I0312 14:11:01.653794 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.656784 master-0 kubenswrapper[4141]: I0312 14:11:01.656755 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 14:11:01.656988 master-0 kubenswrapper[4141]: I0312 14:11:01.656968 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 14:11:01.657739 master-0 kubenswrapper[4141]: I0312 14:11:01.657718 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 14:11:01.724158 master-0 kubenswrapper[4141]: I0312 14:11:01.724093 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.724158 master-0 kubenswrapper[4141]: I0312 14:11:01.724140 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.724158 master-0 kubenswrapper[4141]: I0312 14:11:01.724162 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.724417 master-0 kubenswrapper[4141]: I0312 14:11:01.724223 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.724417 master-0 kubenswrapper[4141]: I0312 14:11:01.724265 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.740163 master-0 kubenswrapper[4141]: I0312 14:11:01.740120 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7c649bf6d4-ldxfn"]
Mar 12 14:11:01.740366 master-0 kubenswrapper[4141]: I0312 14:11:01.740346 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:11:01.741713 master-0 kubenswrapper[4141]: I0312 14:11:01.741675 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 14:11:01.741861 master-0 kubenswrapper[4141]: I0312 14:11:01.741837 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 14:11:01.743478 master-0 kubenswrapper[4141]: I0312 14:11:01.743436 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 14:11:01.825331 master-0 kubenswrapper[4141]: I0312 14:11:01.825274 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825331 master-0 kubenswrapper[4141]: I0312 14:11:01.825324 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:11:01.825562 master-0 kubenswrapper[4141]: I0312 14:11:01.825348 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825562 master-0 kubenswrapper[4141]: I0312 14:11:01.825494 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:11:01.825614 master-0 kubenswrapper[4141]: I0312 14:11:01.825558 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825725 master-0 kubenswrapper[4141]: I0312 14:11:01.825676 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825800 master-0 kubenswrapper[4141]: I0312 14:11:01.825769 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825835 master-0 kubenswrapper[4141]: I0312 14:11:01.825815 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825835 master-0 kubenswrapper[4141]: I0312 14:11:01.825813 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.825960 master-0 kubenswrapper[4141]: I0312 14:11:01.825915 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:11:01.826001 master-0 kubenswrapper[4141]: E0312 14:11:01.825982 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 14:11:01.826157 master-0 kubenswrapper[4141]: E0312 14:11:01.826132 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:02.326040326 +0000 UTC m=+36.887612585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found
Mar 12 14:11:01.826427 master-0 kubenswrapper[4141]: I0312 14:11:01.826393 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.842191 master-0 kubenswrapper[4141]: I0312 14:11:01.842129 4141 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 12 14:11:01.847541 master-0 kubenswrapper[4141]: I0312 14:11:01.847506 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:01.927233 master-0 kubenswrapper[4141]: I0312 14:11:01.927104 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:11:01.927233 master-0 kubenswrapper[4141]: I0312 14:11:01.927145 4141 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:01.927233 master-0 kubenswrapper[4141]: I0312 14:11:01.927163 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:01.927470 master-0 kubenswrapper[4141]: I0312 14:11:01.927369 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:01.930360 master-0 kubenswrapper[4141]: I0312 14:11:01.930324 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:01.944690 master-0 kubenswrapper[4141]: I0312 14:11:01.944642 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:02.051916 master-0 kubenswrapper[4141]: I0312 14:11:02.051840 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:11:02.251736 master-0 kubenswrapper[4141]: I0312 14:11:02.251634 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2"} Mar 12 14:11:02.331204 master-0 kubenswrapper[4141]: I0312 14:11:02.331115 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:11:02.331498 master-0 kubenswrapper[4141]: E0312 14:11:02.331253 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:02.331498 master-0 kubenswrapper[4141]: E0312 14:11:02.331326 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:03.331304432 +0000 UTC m=+37.892876691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:02.805352 master-0 kubenswrapper[4141]: I0312 14:11:02.805240 4141 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 14:11:02.942644 master-0 kubenswrapper[4141]: I0312 14:11:02.942575 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-lbcvf"] Mar 12 14:11:02.943112 master-0 kubenswrapper[4141]: I0312 14:11:02.943061 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:02.946084 master-0 kubenswrapper[4141]: I0312 14:11:02.946026 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 12 14:11:02.946580 master-0 kubenswrapper[4141]: I0312 14:11:02.946085 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 12 14:11:02.946580 master-0 kubenswrapper[4141]: I0312 14:11:02.946141 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 12 14:11:02.947023 master-0 kubenswrapper[4141]: I0312 14:11:02.946988 4141 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 12 14:11:03.035611 master-0 kubenswrapper[4141]: I0312 14:11:03.035514 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files\") pod 
\"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.035854 master-0 kubenswrapper[4141]: I0312 14:11:03.035667 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.035854 master-0 kubenswrapper[4141]: I0312 14:11:03.035713 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.035854 master-0 kubenswrapper[4141]: I0312 14:11:03.035758 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.035854 master-0 kubenswrapper[4141]: I0312 14:11:03.035801 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7qs\" (UniqueName: \"kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137187 master-0 
kubenswrapper[4141]: I0312 14:11:03.137049 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137187 master-0 kubenswrapper[4141]: I0312 14:11:03.137105 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137187 master-0 kubenswrapper[4141]: I0312 14:11:03.137130 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137187 master-0 kubenswrapper[4141]: I0312 14:11:03.137191 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137560 master-0 kubenswrapper[4141]: I0312 14:11:03.137213 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf\") pod 
\"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137560 master-0 kubenswrapper[4141]: I0312 14:11:03.137287 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7qs\" (UniqueName: \"kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137560 master-0 kubenswrapper[4141]: I0312 14:11:03.137319 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137560 master-0 kubenswrapper[4141]: I0312 14:11:03.137333 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.137560 master-0 kubenswrapper[4141]: I0312 14:11:03.137415 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.175632 master-0 kubenswrapper[4141]: I0312 14:11:03.175502 4141 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7c7qs\" (UniqueName: \"kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs\") pod \"assisted-installer-controller-lbcvf\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") " pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.271874 master-0 kubenswrapper[4141]: I0312 14:11:03.271409 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:11:03.282096 master-0 kubenswrapper[4141]: W0312 14:11:03.282046 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod146495bf_0787_483f_a9fc_0e8925b89150.slice/crio-8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c WatchSource:0}: Error finding container 8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c: Status 404 returned error can't find the container with id 8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c Mar 12 14:11:03.338574 master-0 kubenswrapper[4141]: I0312 14:11:03.338530 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:11:03.338761 master-0 kubenswrapper[4141]: E0312 14:11:03.338650 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:03.338761 master-0 kubenswrapper[4141]: E0312 14:11:03.338702 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 
nodeName:}" failed. No retries permitted until 2026-03-12 14:11:05.338687072 +0000 UTC m=+39.900259321 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:04.260105 master-0 kubenswrapper[4141]: I0312 14:11:04.260033 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-lbcvf" event={"ID":"146495bf-0787-483f-a9fc-0e8925b89150","Type":"ContainerStarted","Data":"8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c"} Mar 12 14:11:05.146279 master-0 kubenswrapper[4141]: I0312 14:11:05.146176 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 12 14:11:05.146753 master-0 kubenswrapper[4141]: I0312 14:11:05.146304 4141 scope.go:117] "RemoveContainer" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac" Mar 12 14:11:05.146753 master-0 kubenswrapper[4141]: E0312 14:11:05.146490 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 14:11:05.262561 master-0 kubenswrapper[4141]: I0312 14:11:05.262502 4141 scope.go:117] "RemoveContainer" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac" Mar 12 14:11:05.263075 master-0 kubenswrapper[4141]: E0312 14:11:05.262648 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 14:11:05.350664 master-0 kubenswrapper[4141]: I0312 14:11:05.350610 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:11:05.350864 master-0 kubenswrapper[4141]: E0312 14:11:05.350770 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:05.350864 master-0 kubenswrapper[4141]: E0312 14:11:05.350833 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:09.350817149 +0000 UTC m=+43.912389398 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:06.070855 master-0 kubenswrapper[4141]: I0312 14:11:06.070705 4141 csr.go:261] certificate signing request csr-xkzbn is approved, waiting to be issued Mar 12 14:11:06.076198 master-0 kubenswrapper[4141]: I0312 14:11:06.076068 4141 csr.go:257] certificate signing request csr-xkzbn is issued Mar 12 14:11:07.077929 master-0 kubenswrapper[4141]: I0312 14:11:07.077847 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 08:03:35.432566696 +0000 UTC Mar 12 14:11:07.077929 master-0 kubenswrapper[4141]: I0312 14:11:07.077893 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h52m28.354677012s for next certificate rotation Mar 12 14:11:07.268898 master-0 kubenswrapper[4141]: I0312 14:11:07.268818 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"9ba513db643889b41a810dd1c7684949b6c126d71f8ce738dd6a0c0db835816a"} Mar 12 14:11:07.282456 master-0 kubenswrapper[4141]: I0312 14:11:07.281525 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" podStartSLOduration=2.167880276 podStartE2EDuration="6.281324902s" podCreationTimestamp="2026-03-12 14:11:01 +0000 UTC" firstStartedPulling="2026-03-12 14:11:02.068270343 +0000 UTC m=+36.629842592" lastFinishedPulling="2026-03-12 14:11:06.181714969 +0000 UTC m=+40.743287218" observedRunningTime="2026-03-12 14:11:07.281183158 +0000 UTC m=+41.842755417" 
watchObservedRunningTime="2026-03-12 14:11:07.281324902 +0000 UTC m=+41.842897161" Mar 12 14:11:08.078334 master-0 kubenswrapper[4141]: I0312 14:11:08.078264 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 08:51:27.727964973 +0000 UTC Mar 12 14:11:08.078334 master-0 kubenswrapper[4141]: I0312 14:11:08.078306 4141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h40m19.649661165s for next certificate rotation Mar 12 14:11:08.714304 master-0 kubenswrapper[4141]: I0312 14:11:08.713738 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-78v9p"] Mar 12 14:11:08.714304 master-0 kubenswrapper[4141]: I0312 14:11:08.714030 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-78v9p" Mar 12 14:11:08.776721 master-0 kubenswrapper[4141]: I0312 14:11:08.776658 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftl2k\" (UniqueName: \"kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k\") pod \"mtu-prober-78v9p\" (UID: \"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c\") " pod="openshift-network-operator/mtu-prober-78v9p" Mar 12 14:11:08.877784 master-0 kubenswrapper[4141]: I0312 14:11:08.877705 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftl2k\" (UniqueName: \"kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k\") pod \"mtu-prober-78v9p\" (UID: \"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c\") " pod="openshift-network-operator/mtu-prober-78v9p" Mar 12 14:11:08.895736 master-0 kubenswrapper[4141]: I0312 14:11:08.895660 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftl2k\" (UniqueName: 
\"kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k\") pod \"mtu-prober-78v9p\" (UID: \"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c\") " pod="openshift-network-operator/mtu-prober-78v9p" Mar 12 14:11:09.025885 master-0 kubenswrapper[4141]: I0312 14:11:09.025827 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-78v9p" Mar 12 14:11:09.036110 master-0 kubenswrapper[4141]: W0312 14:11:09.036074 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e7877fc_0d91_4dbe_b2ae_fa50012ced6c.slice/crio-6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844 WatchSource:0}: Error finding container 6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844: Status 404 returned error can't find the container with id 6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844 Mar 12 14:11:09.273750 master-0 kubenswrapper[4141]: I0312 14:11:09.273705 4141 generic.go:334] "Generic (PLEG): container finished" podID="146495bf-0787-483f-a9fc-0e8925b89150" containerID="6033bc31672a320e7b8ffbe7a63f79564d187ec798713169c640338dfe2b84c4" exitCode=0 Mar 12 14:11:09.274502 master-0 kubenswrapper[4141]: I0312 14:11:09.273768 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-lbcvf" event={"ID":"146495bf-0787-483f-a9fc-0e8925b89150","Type":"ContainerDied","Data":"6033bc31672a320e7b8ffbe7a63f79564d187ec798713169c640338dfe2b84c4"} Mar 12 14:11:09.275572 master-0 kubenswrapper[4141]: I0312 14:11:09.275495 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-78v9p" event={"ID":"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c","Type":"ContainerStarted","Data":"e918e5e1279bbcaf698142b1c788174be79639920e9232ace941582c175becab"} Mar 12 14:11:09.275572 master-0 kubenswrapper[4141]: I0312 14:11:09.275565 4141 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-78v9p" event={"ID":"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c","Type":"ContainerStarted","Data":"6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844"} Mar 12 14:11:09.381171 master-0 kubenswrapper[4141]: I0312 14:11:09.381104 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:11:09.381496 master-0 kubenswrapper[4141]: E0312 14:11:09.381426 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:09.381548 master-0 kubenswrapper[4141]: E0312 14:11:09.381531 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:17.381509633 +0000 UTC m=+51.943081882 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:10.279547 master-0 kubenswrapper[4141]: I0312 14:11:10.279458 4141 generic.go:334] "Generic (PLEG): container finished" podID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerID="e918e5e1279bbcaf698142b1c788174be79639920e9232ace941582c175becab" exitCode=0 Mar 12 14:11:10.280575 master-0 kubenswrapper[4141]: I0312 14:11:10.279788 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-78v9p" event={"ID":"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c","Type":"ContainerDied","Data":"e918e5e1279bbcaf698142b1c788174be79639920e9232ace941582c175becab"} Mar 12 14:11:10.305660 master-0 kubenswrapper[4141]: I0312 14:11:10.305587 4141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf"
Mar 12 14:11:10.389413 master-0 kubenswrapper[4141]: I0312 14:11:10.389333 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c7qs\" (UniqueName: \"kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs\") pod \"146495bf-0787-483f-a9fc-0e8925b89150\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") "
Mar 12 14:11:10.389413 master-0 kubenswrapper[4141]: I0312 14:11:10.389380 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files\") pod \"146495bf-0787-483f-a9fc-0e8925b89150\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") "
Mar 12 14:11:10.389413 master-0 kubenswrapper[4141]: I0312 14:11:10.389400 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf\") pod \"146495bf-0787-483f-a9fc-0e8925b89150\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") "
Mar 12 14:11:10.389413 master-0 kubenswrapper[4141]: I0312 14:11:10.389433 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle\") pod \"146495bf-0787-483f-a9fc-0e8925b89150\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") "
Mar 12 14:11:10.389869 master-0 kubenswrapper[4141]: I0312 14:11:10.389450 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf\") pod \"146495bf-0787-483f-a9fc-0e8925b89150\" (UID: \"146495bf-0787-483f-a9fc-0e8925b89150\") "
Mar 12 14:11:10.389869 master-0 kubenswrapper[4141]: I0312 14:11:10.389535 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "146495bf-0787-483f-a9fc-0e8925b89150" (UID: "146495bf-0787-483f-a9fc-0e8925b89150"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:10.389869 master-0 kubenswrapper[4141]: I0312 14:11:10.389538 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "146495bf-0787-483f-a9fc-0e8925b89150" (UID: "146495bf-0787-483f-a9fc-0e8925b89150"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:10.389869 master-0 kubenswrapper[4141]: I0312 14:11:10.389583 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "146495bf-0787-483f-a9fc-0e8925b89150" (UID: "146495bf-0787-483f-a9fc-0e8925b89150"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:10.389869 master-0 kubenswrapper[4141]: I0312 14:11:10.389709 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "146495bf-0787-483f-a9fc-0e8925b89150" (UID: "146495bf-0787-483f-a9fc-0e8925b89150"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:10.396026 master-0 kubenswrapper[4141]: I0312 14:11:10.395434 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs" (OuterVolumeSpecName: "kube-api-access-7c7qs") pod "146495bf-0787-483f-a9fc-0e8925b89150" (UID: "146495bf-0787-483f-a9fc-0e8925b89150"). InnerVolumeSpecName "kube-api-access-7c7qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:11:10.490665 master-0 kubenswrapper[4141]: I0312 14:11:10.490521 4141 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:10.490665 master-0 kubenswrapper[4141]: I0312 14:11:10.490577 4141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c7qs\" (UniqueName: \"kubernetes.io/projected/146495bf-0787-483f-a9fc-0e8925b89150-kube-api-access-7c7qs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:10.490665 master-0 kubenswrapper[4141]: I0312 14:11:10.490588 4141 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:10.490665 master-0 kubenswrapper[4141]: I0312 14:11:10.490597 4141 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:10.490665 master-0 kubenswrapper[4141]: I0312 14:11:10.490605 4141 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/146495bf-0787-483f-a9fc-0e8925b89150-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:11.284154 master-0 kubenswrapper[4141]: I0312 14:11:11.284109 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-lbcvf" event={"ID":"146495bf-0787-483f-a9fc-0e8925b89150","Type":"ContainerDied","Data":"8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c"}
Mar 12 14:11:11.284154 master-0 kubenswrapper[4141]: I0312 14:11:11.284156 4141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c"
Mar 12 14:11:11.285197 master-0 kubenswrapper[4141]: I0312 14:11:11.284166 4141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf"
Mar 12 14:11:11.301976 master-0 kubenswrapper[4141]: I0312 14:11:11.301934 4141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-78v9p"
Mar 12 14:11:11.395698 master-0 kubenswrapper[4141]: I0312 14:11:11.395650 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftl2k\" (UniqueName: \"kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k\") pod \"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c\" (UID: \"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c\") "
Mar 12 14:11:11.398201 master-0 kubenswrapper[4141]: I0312 14:11:11.398088 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k" (OuterVolumeSpecName: "kube-api-access-ftl2k") pod "9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" (UID: "9e7877fc-0d91-4dbe-b2ae-fa50012ced6c"). InnerVolumeSpecName "kube-api-access-ftl2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:11:11.496054 master-0 kubenswrapper[4141]: I0312 14:11:11.495987 4141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftl2k\" (UniqueName: \"kubernetes.io/projected/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c-kube-api-access-ftl2k\") on node \"master-0\" DevicePath \"\""
Mar 12 14:11:12.288170 master-0 kubenswrapper[4141]: I0312 14:11:12.288105 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-78v9p" event={"ID":"9e7877fc-0d91-4dbe-b2ae-fa50012ced6c","Type":"ContainerDied","Data":"6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844"}
Mar 12 14:11:12.288170 master-0 kubenswrapper[4141]: I0312 14:11:12.288146 4141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844"
Mar 12 14:11:12.288834 master-0 kubenswrapper[4141]: I0312 14:11:12.288172 4141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-78v9p"
Mar 12 14:11:13.712484 master-0 kubenswrapper[4141]: I0312 14:11:13.712433 4141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-78v9p"]
Mar 12 14:11:13.716384 master-0 kubenswrapper[4141]: I0312 14:11:13.716348 4141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-78v9p"]
Mar 12 14:11:14.134252 master-0 kubenswrapper[4141]: I0312 14:11:14.134157 4141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" path="/var/lib/kubelet/pods/9e7877fc-0d91-4dbe-b2ae-fa50012ced6c/volumes"
Mar 12 14:11:17.131870 master-0 kubenswrapper[4141]: I0312 14:11:17.131486 4141 scope.go:117] "RemoveContainer" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac"
Mar 12 14:11:17.440229 master-0 kubenswrapper[4141]: I0312 14:11:17.440078 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:11:17.440406 master-0 kubenswrapper[4141]: E0312 14:11:17.440237 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 14:11:17.440406 master-0 kubenswrapper[4141]: E0312 14:11:17.440319 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:33.440297419 +0000 UTC m=+68.001869668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found
Mar 12 14:11:18.303850 master-0 kubenswrapper[4141]: I0312 14:11:18.303754 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 14:11:18.304743 master-0 kubenswrapper[4141]: I0312 14:11:18.304284 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"464680c0443f63fd05a16f58ce52f9d2432c0930cf81a8fc5c4fea579afa01c4"}
Mar 12 14:11:18.591174 master-0 kubenswrapper[4141]: I0312 14:11:18.591035 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=13.591012045 podStartE2EDuration="13.591012045s" podCreationTimestamp="2026-03-12 14:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:11:18.317198492 +0000 UTC m=+52.878770751" watchObservedRunningTime="2026-03-12 14:11:18.591012045 +0000 UTC m=+53.152584294"
Mar 12 14:11:18.591174 master-0 kubenswrapper[4141]: I0312 14:11:18.591172 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zttwz"]
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: E0312 14:11:18.591232 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober"
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: I0312 14:11:18.591244 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober"
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: E0312 14:11:18.591254 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller"
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: I0312 14:11:18.591263 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller"
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: I0312 14:11:18.591306 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller"
Mar 12 14:11:18.591392 master-0 kubenswrapper[4141]: I0312 14:11:18.591315 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober"
Mar 12 14:11:18.591619 master-0 kubenswrapper[4141]: I0312 14:11:18.591503 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.594860 master-0 kubenswrapper[4141]: I0312 14:11:18.594070 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 14:11:18.594860 master-0 kubenswrapper[4141]: I0312 14:11:18.594345 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 12 14:11:18.596125 master-0 kubenswrapper[4141]: I0312 14:11:18.596092 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 12 14:11:18.596962 master-0 kubenswrapper[4141]: I0312 14:11:18.596294 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 14:11:18.649866 master-0 kubenswrapper[4141]: I0312 14:11:18.649810 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.649866 master-0 kubenswrapper[4141]: I0312 14:11:18.649867 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.649963 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.650003 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.650023 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.650044 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.650059 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650105 master-0 kubenswrapper[4141]: I0312 14:11:18.650073 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650113 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650148 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650189 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650221 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650240 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650263 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650283 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650315 master-0 kubenswrapper[4141]: I0312 14:11:18.650302 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.650566 master-0 kubenswrapper[4141]: I0312 14:11:18.650342 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751086 master-0 kubenswrapper[4141]: I0312 14:11:18.751021 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751086 master-0 kubenswrapper[4141]: I0312 14:11:18.751069 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751388 master-0 kubenswrapper[4141]: I0312 14:11:18.751238 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751388 master-0 kubenswrapper[4141]: I0312 14:11:18.751228 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751388 master-0 kubenswrapper[4141]: I0312 14:11:18.751312 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751391 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751544 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751585 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751611 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751618 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.751732 master-0 kubenswrapper[4141]: I0312 14:11:18.751708 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751752 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751768 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751785 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751809 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751830 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751806 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751862 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751862 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751871 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751887 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751928 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751956 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751963 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751991 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.751992 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.752020 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.752015 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.752158 master-0 kubenswrapper[4141]: I0312 14:11:18.752043 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.753447 master-0 kubenswrapper[4141]: I0312 14:11:18.752095 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.753447 master-0 kubenswrapper[4141]: I0312 14:11:18.752161 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.753447 master-0 kubenswrapper[4141]: I0312 14:11:18.752690 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.753447 master-0 kubenswrapper[4141]: I0312 14:11:18.752839 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.773235 master-0 kubenswrapper[4141]: I0312 14:11:18.773180 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.798618 master-0 kubenswrapper[4141]: I0312 14:11:18.798526 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-h868v"]
Mar 12 14:11:18.798942 master-0 kubenswrapper[4141]: I0312 14:11:18.798928 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.801162 master-0 kubenswrapper[4141]: I0312 14:11:18.801113 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 12 14:11:18.802141 master-0 kubenswrapper[4141]: I0312 14:11:18.802097 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 12 14:11:18.852738 master-0 kubenswrapper[4141]: I0312 14:11:18.852608 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.852738 master-0 kubenswrapper[4141]: I0312 14:11:18.852699 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853047 master-0 kubenswrapper[4141]: I0312 14:11:18.852740 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853047 master-0 kubenswrapper[4141]: I0312 14:11:18.852773 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853047 master-0 kubenswrapper[4141]: I0312 14:11:18.852809 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853159 master-0 kubenswrapper[4141]: I0312 14:11:18.853085 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853159 master-0 kubenswrapper[4141]: I0312 14:11:18.853122 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.853342 master-0 kubenswrapper[4141]: I0312 14:11:18.853261 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.905464 master-0 kubenswrapper[4141]: I0312 14:11:18.905360 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zttwz"
Mar 12 14:11:18.922760 master-0 kubenswrapper[4141]: W0312 14:11:18.922684 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95c11263_0d68_4b11_bcfd_bcb0e96a6988.slice/crio-fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9 WatchSource:0}: Error finding container fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9: Status 404 returned error can't find the container with id fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9
Mar 12 14:11:18.954227 master-0 kubenswrapper[4141]: I0312 14:11:18.954140 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.954227 master-0 kubenswrapper[4141]: I0312 14:11:18.954215 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954241 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954263 4141 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954301 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954327 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954348 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954388 master-0 kubenswrapper[4141]: I0312 14:11:18.954371 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " 
pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954724 master-0 kubenswrapper[4141]: I0312 14:11:18.954669 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.954837 master-0 kubenswrapper[4141]: I0312 14:11:18.954787 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.955217 master-0 kubenswrapper[4141]: I0312 14:11:18.955171 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.955291 master-0 kubenswrapper[4141]: I0312 14:11:18.955197 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.955347 master-0 kubenswrapper[4141]: I0312 14:11:18.955295 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod 
\"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.955863 master-0 kubenswrapper[4141]: I0312 14:11:18.955811 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.956639 master-0 kubenswrapper[4141]: I0312 14:11:18.956593 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:18.984282 master-0 kubenswrapper[4141]: I0312 14:11:18.984203 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:19.112350 master-0 kubenswrapper[4141]: I0312 14:11:19.112173 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:11:19.124341 master-0 kubenswrapper[4141]: W0312 14:11:19.124294 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9757756c_cb67_4b6f_99c3_dd63f904897a.slice/crio-6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b WatchSource:0}: Error finding container 6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b: Status 404 returned error can't find the container with id 6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b Mar 12 14:11:19.309241 master-0 kubenswrapper[4141]: I0312 14:11:19.309172 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zttwz" event={"ID":"95c11263-0d68-4b11-bcfd-bcb0e96a6988","Type":"ContainerStarted","Data":"fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9"} Mar 12 14:11:19.310421 master-0 kubenswrapper[4141]: I0312 14:11:19.310385 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerStarted","Data":"6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b"} Mar 12 14:11:19.581329 master-0 kubenswrapper[4141]: I0312 14:11:19.581281 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n9v7g"] Mar 12 14:11:19.581726 master-0 kubenswrapper[4141]: I0312 14:11:19.581658 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:19.581805 master-0 kubenswrapper[4141]: E0312 14:11:19.581738 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:19.662460 master-0 kubenswrapper[4141]: I0312 14:11:19.662384 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:19.662460 master-0 kubenswrapper[4141]: I0312 14:11:19.662454 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:19.763404 master-0 kubenswrapper[4141]: I0312 14:11:19.763274 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:19.763404 master-0 kubenswrapper[4141]: I0312 14:11:19.763378 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdqv\" 
(UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:19.763680 master-0 kubenswrapper[4141]: E0312 14:11:19.763453 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:19.763680 master-0 kubenswrapper[4141]: E0312 14:11:19.763543 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:20.263523268 +0000 UTC m=+54.825095517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:19.781023 master-0 kubenswrapper[4141]: I0312 14:11:19.780973 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:20.267943 master-0 kubenswrapper[4141]: I0312 14:11:20.267846 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:20.268197 master-0 
kubenswrapper[4141]: E0312 14:11:20.268071 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:20.268197 master-0 kubenswrapper[4141]: E0312 14:11:20.268143 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:21.268118927 +0000 UTC m=+55.829691196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:21.131073 master-0 kubenswrapper[4141]: I0312 14:11:21.131019 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:21.131585 master-0 kubenswrapper[4141]: E0312 14:11:21.131150 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:21.275378 master-0 kubenswrapper[4141]: I0312 14:11:21.275332 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:21.275590 master-0 kubenswrapper[4141]: E0312 14:11:21.275456 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:21.275590 master-0 kubenswrapper[4141]: E0312 14:11:21.275514 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:23.275497028 +0000 UTC m=+57.837069277 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:22.319272 master-0 kubenswrapper[4141]: I0312 14:11:22.319021 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="cfa5b038bc7b07de92bf843b3a45833830090fe9d6879ece21a0622781be697c" exitCode=0 Mar 12 14:11:22.319272 master-0 kubenswrapper[4141]: I0312 14:11:22.319072 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"cfa5b038bc7b07de92bf843b3a45833830090fe9d6879ece21a0622781be697c"} Mar 12 14:11:23.130463 master-0 kubenswrapper[4141]: I0312 14:11:23.130412 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:23.130658 master-0 kubenswrapper[4141]: E0312 14:11:23.130526 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:23.345261 master-0 kubenswrapper[4141]: I0312 14:11:23.345186 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:23.345691 master-0 kubenswrapper[4141]: E0312 14:11:23.345340 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:23.345691 master-0 kubenswrapper[4141]: E0312 14:11:23.345437 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:27.345419197 +0000 UTC m=+61.906991446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:25.131351 master-0 kubenswrapper[4141]: I0312 14:11:25.131291 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:25.131940 master-0 kubenswrapper[4141]: E0312 14:11:25.131407 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:27.131014 master-0 kubenswrapper[4141]: I0312 14:11:27.130964 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:27.131563 master-0 kubenswrapper[4141]: E0312 14:11:27.131151 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:27.405572 master-0 kubenswrapper[4141]: I0312 14:11:27.405468 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:27.405724 master-0 kubenswrapper[4141]: E0312 14:11:27.405626 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:27.405724 master-0 kubenswrapper[4141]: E0312 14:11:27.405703 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:35.405685186 +0000 UTC m=+69.967257435 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:29.131541 master-0 kubenswrapper[4141]: I0312 14:11:29.131334 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:29.131541 master-0 kubenswrapper[4141]: E0312 14:11:29.131494 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:30.989751 master-0 kubenswrapper[4141]: I0312 14:11:30.989686 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"] Mar 12 14:11:30.990599 master-0 kubenswrapper[4141]: I0312 14:11:30.990565 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:30.992208 master-0 kubenswrapper[4141]: I0312 14:11:30.992033 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 14:11:30.992732 master-0 kubenswrapper[4141]: I0312 14:11:30.992612 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:11:30.993730 master-0 kubenswrapper[4141]: I0312 14:11:30.993265 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:11:30.993730 master-0 kubenswrapper[4141]: I0312 14:11:30.993430 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:11:30.993730 master-0 kubenswrapper[4141]: I0312 14:11:30.993616 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:11:31.130582 master-0 kubenswrapper[4141]: I0312 14:11:31.130491 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:31.130792 master-0 kubenswrapper[4141]: E0312 14:11:31.130645 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:31.177425 master-0 kubenswrapper[4141]: I0312 14:11:31.177337 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.177425 master-0 kubenswrapper[4141]: I0312 14:11:31.177389 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.177710 master-0 kubenswrapper[4141]: I0312 14:11:31.177589 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.177710 master-0 kubenswrapper[4141]: I0312 14:11:31.177677 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.201855 master-0 
kubenswrapper[4141]: I0312 14:11:31.201815 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pq7n2"] Mar 12 14:11:31.202591 master-0 kubenswrapper[4141]: I0312 14:11:31.202571 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.205973 master-0 kubenswrapper[4141]: I0312 14:11:31.205918 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:11:31.207155 master-0 kubenswrapper[4141]: I0312 14:11:31.207100 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:11:31.281508 master-0 kubenswrapper[4141]: I0312 14:11:31.278952 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.281508 master-0 kubenswrapper[4141]: I0312 14:11:31.279069 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.281508 master-0 kubenswrapper[4141]: I0312 14:11:31.279112 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.281508 master-0 kubenswrapper[4141]: I0312 14:11:31.279149 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.281508 master-0 kubenswrapper[4141]: I0312 14:11:31.280268 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.281836 master-0 kubenswrapper[4141]: I0312 14:11:31.281804 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.289350 master-0 kubenswrapper[4141]: I0312 14:11:31.286019 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.299444 master-0 kubenswrapper[4141]: I0312 14:11:31.296172 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.313615 master-0 kubenswrapper[4141]: I0312 14:11:31.313574 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:11:31.382738 master-0 kubenswrapper[4141]: I0312 14:11:31.382652 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.382738 master-0 kubenswrapper[4141]: I0312 14:11:31.382730 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.382983 master-0 kubenswrapper[4141]: I0312 14:11:31.382758 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.382983 master-0 kubenswrapper[4141]: I0312 14:11:31.382778 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383259 master-0 kubenswrapper[4141]: I0312 14:11:31.383214 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383294 master-0 kubenswrapper[4141]: I0312 14:11:31.383270 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383324 master-0 kubenswrapper[4141]: I0312 14:11:31.383292 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383324 master-0 kubenswrapper[4141]: I0312 14:11:31.383313 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383382 master-0 kubenswrapper[4141]: I0312 14:11:31.383332 4141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383382 master-0 kubenswrapper[4141]: I0312 14:11:31.383349 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383382 master-0 kubenswrapper[4141]: I0312 14:11:31.383362 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383458 master-0 kubenswrapper[4141]: I0312 14:11:31.383409 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8bw\" (UniqueName: \"kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383458 master-0 kubenswrapper[4141]: I0312 14:11:31.383427 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383458 master-0 
kubenswrapper[4141]: I0312 14:11:31.383445 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383537 master-0 kubenswrapper[4141]: I0312 14:11:31.383463 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383537 master-0 kubenswrapper[4141]: I0312 14:11:31.383480 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383537 master-0 kubenswrapper[4141]: I0312 14:11:31.383521 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383537 master-0 kubenswrapper[4141]: I0312 14:11:31.383536 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: 
\"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383656 master-0 kubenswrapper[4141]: I0312 14:11:31.383558 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.383656 master-0 kubenswrapper[4141]: I0312 14:11:31.383581 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483742 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483790 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483806 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483822 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8bw\" (UniqueName: \"kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483836 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.483837 master-0 kubenswrapper[4141]: I0312 14:11:31.483852 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.483868 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.483887 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484126 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484167 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484164 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484220 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484217 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484289 master-0 kubenswrapper[4141]: I0312 14:11:31.484217 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484335 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484361 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484381 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484396 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484411 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484431 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484465 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484487 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484507 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484530 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484557 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484564 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484599 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.484611 master-0 kubenswrapper[4141]: I0312 14:11:31.484601 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484548 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484661 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484663 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484700 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484734 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484820 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.484841 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485259 master-0 kubenswrapper[4141]: I0312 14:11:31.485100 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485704 master-0 kubenswrapper[4141]: I0312 14:11:31.485272 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.485704 master-0 kubenswrapper[4141]: I0312 14:11:31.485466 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.492591 master-0 kubenswrapper[4141]: I0312 14:11:31.487603 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.504831 master-0 kubenswrapper[4141]: I0312 14:11:31.501980 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8bw\" (UniqueName: \"kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw\") pod \"ovnkube-node-pq7n2\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:31.514910 master-0 kubenswrapper[4141]: I0312 14:11:31.514813 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:33.131343 master-0 kubenswrapper[4141]: I0312 14:11:33.131292 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:33.132128 master-0 kubenswrapper[4141]: E0312 14:11:33.131420 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:33.497615 master-0 kubenswrapper[4141]: I0312 14:11:33.497542 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:11:33.497807 master-0 kubenswrapper[4141]: E0312 14:11:33.497688 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:33.497807 master-0 kubenswrapper[4141]: E0312 14:11:33.497745 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:05.497725662 +0000 UTC m=+100.059297911 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:11:35.130783 master-0 kubenswrapper[4141]: I0312 14:11:35.130746 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:35.131341 master-0 kubenswrapper[4141]: E0312 14:11:35.131268 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:35.414413 master-0 kubenswrapper[4141]: I0312 14:11:35.414287 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:35.414571 master-0 kubenswrapper[4141]: E0312 14:11:35.414433 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:35.414571 master-0 kubenswrapper[4141]: E0312 14:11:35.414498 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:51.414482313 +0000 UTC m=+85.976054562 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 14:11:35.509498 master-0 kubenswrapper[4141]: W0312 14:11:35.509221 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6defef79_6058_466a_ae0b_8eb9258126be.slice/crio-7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162 WatchSource:0}: Error finding container 7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162: Status 404 returned error can't find the container with id 7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162 Mar 12 14:11:36.309260 master-0 kubenswrapper[4141]: I0312 14:11:36.306963 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-8q2fv"] Mar 12 14:11:36.309260 master-0 kubenswrapper[4141]: I0312 14:11:36.307253 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:36.309260 master-0 kubenswrapper[4141]: E0312 14:11:36.307326 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:36.323140 master-0 kubenswrapper[4141]: I0312 14:11:36.323086 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:36.369821 master-0 kubenswrapper[4141]: I0312 14:11:36.369722 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"b89d1bc2f4a8ea2138bad228a8f181af661c81a072e9cd06792d7137bd4ebc43"} Mar 12 14:11:36.371273 master-0 kubenswrapper[4141]: I0312 14:11:36.371228 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"9f2fe9790563ec38565007414495f6da66cc6ef242600efb951afc8284d7b4ba"} Mar 12 14:11:36.371273 master-0 kubenswrapper[4141]: I0312 14:11:36.371260 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162"} Mar 12 14:11:36.372882 master-0 kubenswrapper[4141]: I0312 14:11:36.372833 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="badf1c98d1937a2f8e44bf83e8bf87b7da9889235c52744f099d88d3a841de7f" exitCode=0 Mar 12 14:11:36.372882 master-0 kubenswrapper[4141]: I0312 14:11:36.372869 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"badf1c98d1937a2f8e44bf83e8bf87b7da9889235c52744f099d88d3a841de7f"} Mar 12 14:11:36.424758 master-0 kubenswrapper[4141]: I0312 14:11:36.424682 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:36.448763 master-0 kubenswrapper[4141]: E0312 14:11:36.448713 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:11:36.448763 master-0 kubenswrapper[4141]: E0312 14:11:36.448756 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:11:36.448914 master-0 kubenswrapper[4141]: E0312 14:11:36.448772 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:36.448914 master-0 kubenswrapper[4141]: E0312 14:11:36.448841 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:36.948822224 +0000 UTC m=+71.510394493 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:36.794283 master-0 kubenswrapper[4141]: I0312 14:11:36.794237 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-rqq4v"] Mar 12 14:11:36.795354 master-0 kubenswrapper[4141]: I0312 14:11:36.795328 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:36.797447 master-0 kubenswrapper[4141]: I0312 14:11:36.797406 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 14:11:36.797759 master-0 kubenswrapper[4141]: I0312 14:11:36.797658 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 14:11:36.797845 master-0 kubenswrapper[4141]: I0312 14:11:36.797420 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:11:36.797882 master-0 kubenswrapper[4141]: I0312 14:11:36.797867 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:11:36.798035 master-0 kubenswrapper[4141]: I0312 14:11:36.797980 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:11:36.946059 master-0 kubenswrapper[4141]: I0312 14:11:36.943301 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:36.946358 master-0 kubenswrapper[4141]: I0312 14:11:36.946058 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:36.946358 master-0 kubenswrapper[4141]: I0312 14:11:36.946105 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:36.946358 master-0 kubenswrapper[4141]: I0312 14:11:36.946172 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.047628 master-0 kubenswrapper[4141]: I0312 14:11:37.047481 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " 
pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.047628 master-0 kubenswrapper[4141]: I0312 14:11:37.047530 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.047628 master-0 kubenswrapper[4141]: I0312 14:11:37.047550 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.047628 master-0 kubenswrapper[4141]: I0312 14:11:37.047589 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.048882 master-0 kubenswrapper[4141]: I0312 14:11:37.047907 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:37.048882 master-0 kubenswrapper[4141]: E0312 14:11:37.048372 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:11:37.048882 master-0 kubenswrapper[4141]: E0312 14:11:37.048392 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:11:37.048882 master-0 kubenswrapper[4141]: E0312 14:11:37.048405 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:37.048882 master-0 kubenswrapper[4141]: E0312 14:11:37.048448 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:38.048435372 +0000 UTC m=+72.610007621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:37.049746 master-0 kubenswrapper[4141]: I0312 14:11:37.048928 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.049746 master-0 kubenswrapper[4141]: I0312 14:11:37.049177 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.055253 master-0 kubenswrapper[4141]: I0312 14:11:37.053507 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.066427 master-0 kubenswrapper[4141]: I0312 14:11:37.066380 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: 
\"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.108738 master-0 kubenswrapper[4141]: I0312 14:11:37.108337 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:11:37.131034 master-0 kubenswrapper[4141]: I0312 14:11:37.130982 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:37.131236 master-0 kubenswrapper[4141]: E0312 14:11:37.131193 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:37.221577 master-0 kubenswrapper[4141]: W0312 14:11:37.221521 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode72c2e9c_978b_4f87_b6e3_6e20d82cc5e9.slice/crio-273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9 WatchSource:0}: Error finding container 273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9: Status 404 returned error can't find the container with id 273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9 Mar 12 14:11:37.378004 master-0 kubenswrapper[4141]: I0312 14:11:37.377098 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zttwz" event={"ID":"95c11263-0d68-4b11-bcfd-bcb0e96a6988","Type":"ContainerStarted","Data":"5b018faa420052ddd30a7440e3b7a6b3748f361b955c0e4528b5de090907c8ec"} Mar 12 14:11:37.378725 master-0 kubenswrapper[4141]: I0312 14:11:37.378671 4141 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9"} Mar 12 14:11:37.395244 master-0 kubenswrapper[4141]: I0312 14:11:37.395172 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zttwz" podStartSLOduration=1.9291785369999999 podStartE2EDuration="19.395152204s" podCreationTimestamp="2026-03-12 14:11:18 +0000 UTC" firstStartedPulling="2026-03-12 14:11:18.925552317 +0000 UTC m=+53.487124606" lastFinishedPulling="2026-03-12 14:11:36.391526014 +0000 UTC m=+70.953098273" observedRunningTime="2026-03-12 14:11:37.395043331 +0000 UTC m=+71.956615590" watchObservedRunningTime="2026-03-12 14:11:37.395152204 +0000 UTC m=+71.956724453" Mar 12 14:11:38.056273 master-0 kubenswrapper[4141]: I0312 14:11:38.056221 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:38.056586 master-0 kubenswrapper[4141]: E0312 14:11:38.056528 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:11:38.056586 master-0 kubenswrapper[4141]: E0312 14:11:38.056587 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:11:38.056709 master-0 kubenswrapper[4141]: E0312 14:11:38.056605 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod 
openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:38.056709 master-0 kubenswrapper[4141]: E0312 14:11:38.056696 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:40.056672376 +0000 UTC m=+74.618244625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:38.132850 master-0 kubenswrapper[4141]: I0312 14:11:38.131075 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:38.132850 master-0 kubenswrapper[4141]: E0312 14:11:38.131310 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:38.383089 master-0 kubenswrapper[4141]: I0312 14:11:38.382946 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerStarted","Data":"affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b"} Mar 12 14:11:39.131383 master-0 kubenswrapper[4141]: I0312 14:11:39.131303 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:39.131597 master-0 kubenswrapper[4141]: E0312 14:11:39.131470 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:39.387462 master-0 kubenswrapper[4141]: I0312 14:11:39.387359 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b" exitCode=0 Mar 12 14:11:39.387462 master-0 kubenswrapper[4141]: I0312 14:11:39.387433 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b"} Mar 12 14:11:40.083961 master-0 kubenswrapper[4141]: I0312 14:11:40.083827 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:40.084154 master-0 kubenswrapper[4141]: E0312 14:11:40.084075 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:11:40.084154 master-0 kubenswrapper[4141]: E0312 14:11:40.084128 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:11:40.084154 master-0 kubenswrapper[4141]: E0312 14:11:40.084144 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:40.084239 master-0 kubenswrapper[4141]: E0312 14:11:40.084201 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:44.084185689 +0000 UTC m=+78.645757938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:40.131434 master-0 kubenswrapper[4141]: I0312 14:11:40.131372 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:40.131514 master-0 kubenswrapper[4141]: E0312 14:11:40.131492 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:40.391683 master-0 kubenswrapper[4141]: I0312 14:11:40.391586 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="9fbd87c96fccfe4bfad334fd8c3bc1df622b06005839f21efff6ba86833c49f2" exitCode=0 Mar 12 14:11:40.391683 master-0 kubenswrapper[4141]: I0312 14:11:40.391629 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"9fbd87c96fccfe4bfad334fd8c3bc1df622b06005839f21efff6ba86833c49f2"} Mar 12 14:11:41.130479 master-0 kubenswrapper[4141]: I0312 14:11:41.130439 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:41.130684 master-0 kubenswrapper[4141]: E0312 14:11:41.130543 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:42.130759 master-0 kubenswrapper[4141]: I0312 14:11:42.130693 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:42.131237 master-0 kubenswrapper[4141]: E0312 14:11:42.130908 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:42.202968 master-0 kubenswrapper[4141]: I0312 14:11:42.202915 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 12 14:11:43.131006 master-0 kubenswrapper[4141]: I0312 14:11:43.130962 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:43.131450 master-0 kubenswrapper[4141]: E0312 14:11:43.131113 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:44.118167 master-0 kubenswrapper[4141]: I0312 14:11:44.118104 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:44.118369 master-0 kubenswrapper[4141]: E0312 14:11:44.118252 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:11:44.118369 master-0 kubenswrapper[4141]: E0312 14:11:44.118270 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:11:44.118369 master-0 kubenswrapper[4141]: E0312 14:11:44.118280 4141 projected.go:194] Error 
preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:44.118369 master-0 kubenswrapper[4141]: E0312 14:11:44.118329 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:11:52.118316305 +0000 UTC m=+86.679888554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:11:44.131351 master-0 kubenswrapper[4141]: I0312 14:11:44.131303 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:44.131812 master-0 kubenswrapper[4141]: E0312 14:11:44.131403 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:45.131787 master-0 kubenswrapper[4141]: I0312 14:11:45.131692 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:45.132412 master-0 kubenswrapper[4141]: E0312 14:11:45.132065 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:46.131696 master-0 kubenswrapper[4141]: I0312 14:11:46.131645 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:46.132379 master-0 kubenswrapper[4141]: E0312 14:11:46.132336 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:46.367887 master-0 kubenswrapper[4141]: I0312 14:11:46.367813 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=4.367629433 podStartE2EDuration="4.367629433s" podCreationTimestamp="2026-03-12 14:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:11:46.367059959 +0000 UTC m=+80.928632218" watchObservedRunningTime="2026-03-12 14:11:46.367629433 +0000 UTC m=+80.929201682" Mar 12 14:11:47.131075 master-0 kubenswrapper[4141]: I0312 14:11:47.131017 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:47.131269 master-0 kubenswrapper[4141]: E0312 14:11:47.131172 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:48.130665 master-0 kubenswrapper[4141]: I0312 14:11:48.130588 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:48.131182 master-0 kubenswrapper[4141]: E0312 14:11:48.130736 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:49.131185 master-0 kubenswrapper[4141]: I0312 14:11:49.131130 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:49.131649 master-0 kubenswrapper[4141]: E0312 14:11:49.131277 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:50.131677 master-0 kubenswrapper[4141]: I0312 14:11:50.131588 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:11:50.132325 master-0 kubenswrapper[4141]: E0312 14:11:50.131707 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:11:51.130544 master-0 kubenswrapper[4141]: I0312 14:11:51.130473 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:51.130739 master-0 kubenswrapper[4141]: E0312 14:11:51.130608 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4"
Mar 12 14:11:51.480874 master-0 kubenswrapper[4141]: I0312 14:11:51.480767 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:11:51.481319 master-0 kubenswrapper[4141]: E0312 14:11:51.480976 4141 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 14:11:51.481319 master-0 kubenswrapper[4141]: E0312 14:11:51.481069 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:23.481051204 +0000 UTC m=+118.042623453 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 14:11:52.133518 master-0 kubenswrapper[4141]: I0312 14:11:52.133428 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:11:52.133759 master-0 kubenswrapper[4141]: E0312 14:11:52.133553 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7"
Mar 12 14:11:52.187881 master-0 kubenswrapper[4141]: I0312 14:11:52.187797 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:11:52.188089 master-0 kubenswrapper[4141]: E0312 14:11:52.188045 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 14:11:52.188089 master-0 kubenswrapper[4141]: E0312 14:11:52.188062 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 14:11:52.188089 master-0 kubenswrapper[4141]: E0312 14:11:52.188074 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 14:11:52.188200 master-0 kubenswrapper[4141]: E0312 14:11:52.188129 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:08.188106784 +0000 UTC m=+102.749679033 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 14:11:53.131015 master-0 kubenswrapper[4141]: I0312 14:11:53.130973 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:11:53.132911 master-0 kubenswrapper[4141]: E0312 14:11:53.131081 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4"
Mar 12 14:11:53.429285 master-0 kubenswrapper[4141]: I0312 14:11:53.428798 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba"}
Mar 12 14:11:53.431729 master-0 kubenswrapper[4141]: I0312 14:11:53.431684 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"4f82da527f459a4e4785bd921abd6a49239f5f19783c788a0d00d2e0b9706a60"}
Mar 12 14:11:53.435114 master-0 kubenswrapper[4141]: I0312 14:11:53.435064 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="d39ce324f3db6164db245417f53b6d8ff38716c386224704af63bf67e207b5f1" exitCode=0
Mar 12 14:11:53.435203 master-0 kubenswrapper[4141]: I0312 14:11:53.435135 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"d39ce324f3db6164db245417f53b6d8ff38716c386224704af63bf67e207b5f1"}
Mar 12 14:11:53.436649 master-0 kubenswrapper[4141]: I0312 14:11:53.436567 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d" exitCode=0
Mar 12 14:11:53.436649 master-0 kubenswrapper[4141]: I0312 14:11:53.436614 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"}
Mar 12 14:11:53.443226 master-0 kubenswrapper[4141]: I0312 14:11:53.441803 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" podStartSLOduration=6.650573733 podStartE2EDuration="23.441788641s" podCreationTimestamp="2026-03-12 14:11:30 +0000 UTC" firstStartedPulling="2026-03-12 14:11:36.312939777 +0000 UTC m=+70.874512026" lastFinishedPulling="2026-03-12 14:11:53.104154685 +0000 UTC m=+87.665726934" observedRunningTime="2026-03-12 14:11:53.441215607 +0000 UTC m=+88.002787856" watchObservedRunningTime="2026-03-12 14:11:53.441788641 +0000 UTC m=+88.003360890"
Mar 12 14:11:54.132175 master-0 kubenswrapper[4141]: I0312 14:11:54.131621 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:11:54.132175 master-0 kubenswrapper[4141]: E0312 14:11:54.131735 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7"
Mar 12 14:11:54.443268 master-0 kubenswrapper[4141]: I0312 14:11:54.443138 4141 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="f2ba438d34b4b3304e8d60d973e3309595cd9060a2ebe30a5d88db295ad25e25" exitCode=0
Mar 12 14:11:54.443268 master-0 kubenswrapper[4141]: I0312 14:11:54.443253 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"f2ba438d34b4b3304e8d60d973e3309595cd9060a2ebe30a5d88db295ad25e25"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448845 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448889 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448932 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448945 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448958 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.448970 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"}
Mar 12 14:11:54.456193 master-0 kubenswrapper[4141]: I0312 14:11:54.454605 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99"}
Mar 12 14:11:54.476499 master-0 kubenswrapper[4141]: I0312 14:11:54.476423 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-rqq4v" podStartSLOduration=2.55352979 podStartE2EDuration="18.476401019s" podCreationTimestamp="2026-03-12 14:11:36 +0000 UTC" firstStartedPulling="2026-03-12 14:11:37.238358649 +0000 UTC m=+71.799930898" lastFinishedPulling="2026-03-12 14:11:53.161229878 +0000 UTC m=+87.722802127" observedRunningTime="2026-03-12 14:11:54.474758448 +0000 UTC m=+89.036330717" watchObservedRunningTime="2026-03-12 14:11:54.476401019 +0000 UTC m=+89.037973308"
Mar 12 14:11:55.131241 master-0 kubenswrapper[4141]: I0312 14:11:55.130847 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:11:55.131477 master-0 kubenswrapper[4141]: E0312 14:11:55.131355 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4"
Mar 12 14:11:55.460538 master-0 kubenswrapper[4141]: I0312 14:11:55.460398 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerStarted","Data":"ebd95ca3fb2815dc4627a44d443095574f5ee1471a5dae51cc1433a123d8f27b"}
Mar 12 14:11:55.482229 master-0 kubenswrapper[4141]: I0312 14:11:55.482144 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-h868v" podStartSLOduration=3.5902102940000002 podStartE2EDuration="37.482118761s" podCreationTimestamp="2026-03-12 14:11:18 +0000 UTC" firstStartedPulling="2026-03-12 14:11:19.127079307 +0000 UTC m=+53.688651576" lastFinishedPulling="2026-03-12 14:11:53.018987794 +0000 UTC m=+87.580560043" observedRunningTime="2026-03-12 14:11:55.481108436 +0000 UTC m=+90.042680695" watchObservedRunningTime="2026-03-12 14:11:55.482118761 +0000 UTC m=+90.043691040"
Mar 12 14:11:56.132749 master-0 kubenswrapper[4141]: I0312 14:11:56.132705 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:11:56.132985 master-0 kubenswrapper[4141]: E0312 14:11:56.132821 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7"
Mar 12 14:11:56.469312 master-0 kubenswrapper[4141]: I0312 14:11:56.469193 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"}
Mar 12 14:11:56.654188 master-0 kubenswrapper[4141]: I0312 14:11:56.654128 4141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pq7n2"]
Mar 12 14:11:57.131057 master-0 kubenswrapper[4141]: I0312 14:11:57.130958 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:11:57.131263 master-0 kubenswrapper[4141]: E0312 14:11:57.131185 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4"
Mar 12 14:11:58.131429 master-0 kubenswrapper[4141]: I0312 14:11:58.131179 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:11:58.131429 master-0 kubenswrapper[4141]: E0312 14:11:58.131325 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7"
Mar 12 14:11:58.482514 master-0 kubenswrapper[4141]: I0312 14:11:58.482457 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerStarted","Data":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"}
Mar 12 14:11:58.482957 master-0 kubenswrapper[4141]: I0312 14:11:58.482839 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-controller" containerID="cri-o://567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8" gracePeriod=30
Mar 12 14:11:58.483111 master-0 kubenswrapper[4141]: I0312 14:11:58.483029 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-node" containerID="cri-o://8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb" gracePeriod=30
Mar 12 14:11:58.483111 master-0 kubenswrapper[4141]: I0312 14:11:58.483092 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="northd" containerID="cri-o://fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624" gracePeriod=30
Mar 12 14:11:58.483276 master-0 kubenswrapper[4141]: I0312 14:11:58.483124 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-acl-logging" containerID="cri-o://6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa" gracePeriod=30
Mar 12 14:11:58.483276 master-0 kubenswrapper[4141]: I0312 14:11:58.483156 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="sbdb" containerID="cri-o://1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" gracePeriod=30
Mar 12 14:11:58.483276 master-0 kubenswrapper[4141]: I0312 14:11:58.482884 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="nbdb" containerID="cri-o://caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" gracePeriod=30
Mar 12 14:11:58.483276 master-0 kubenswrapper[4141]: I0312 14:11:58.483134 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de" gracePeriod=30
Mar 12 14:11:58.483594 master-0 kubenswrapper[4141]: I0312 14:11:58.483312 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2"
Mar 12 14:11:58.483594 master-0 kubenswrapper[4141]: I0312 14:11:58.483333 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2"
Mar 12 14:11:58.483594 master-0 kubenswrapper[4141]: I0312 14:11:58.483344 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2"
Mar 12 14:11:58.490621 master-0 kubenswrapper[4141]: E0312 14:11:58.490182 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 12 14:11:58.492072 master-0 kubenswrapper[4141]: E0312 14:11:58.492029 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 12 14:11:58.494212 master-0 kubenswrapper[4141]: E0312 14:11:58.494166 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 12 14:11:58.497273 master-0 kubenswrapper[4141]: E0312 14:11:58.496145 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 12 14:11:58.497273 master-0 kubenswrapper[4141]: E0312 14:11:58.496178 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 12 14:11:58.497273 master-0 kubenswrapper[4141]: E0312 14:11:58.496248 4141 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="nbdb"
Mar 12 14:11:58.500489 master-0 kubenswrapper[4141]: E0312 14:11:58.500416 4141 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 12 14:11:58.500489 master-0 kubenswrapper[4141]: E0312 14:11:58.500456 4141 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="sbdb"
Mar 12 14:11:58.538965 master-0 kubenswrapper[4141]: I0312 14:11:58.538638 4141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovnkube-controller" containerID="cri-o://99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd" gracePeriod=30
Mar 12 14:11:58.748305 master-0 kubenswrapper[4141]: I0312 14:11:58.748267 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovnkube-controller/0.log"
Mar 12 14:11:58.749849 master-0 kubenswrapper[4141]: I0312 14:11:58.749817 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/kube-rbac-proxy-ovn-metrics/0.log"
Mar 12 14:11:58.750276 master-0 kubenswrapper[4141]: I0312 14:11:58.750255 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/kube-rbac-proxy-node/0.log"
Mar 12 14:11:58.750676 master-0 kubenswrapper[4141]: I0312 14:11:58.750616 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovn-acl-logging/0.log"
Mar 12 14:11:58.751039 master-0 kubenswrapper[4141]: I0312 14:11:58.751022 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovn-controller/0.log"
Mar 12 14:11:58.751479 master-0 kubenswrapper[4141]: I0312 14:11:58.751463 4141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.807477 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h4b4k"]
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.808881 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-acl-logging"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.808922 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-acl-logging"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.808937 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="northd"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.808945 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="northd"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.808955 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="nbdb"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.808962 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="nbdb"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.808979 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovnkube-controller"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.808988 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovnkube-controller"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.808995 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-controller"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.809003 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-controller"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.809011 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-node"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.809019 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-node"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.809043 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.809052 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.809064 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kubecfg-setup"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.809074 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kubecfg-setup"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: E0312 14:11:58.809085 4141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="sbdb"
Mar 12 14:11:58.809172 master-0 kubenswrapper[4141]: I0312 14:11:58.809094 4141 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="sbdb"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809241 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-acl-logging"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809261 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="nbdb"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809269 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-node"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809277 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="northd"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809286 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809301 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="sbdb"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809310 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovnkube-controller"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.809319 4141 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerName="ovn-controller"
Mar 12 14:11:58.810973 master-0 kubenswrapper[4141]: I0312 14:11:58.810774 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845247 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845284 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845310 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845333 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845354 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845373 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845383 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845425 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845443 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845470 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845489 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845505 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845524 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845540 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845528 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash" (OuterVolumeSpecName: "host-slash") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845569 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.845291 master-0 kubenswrapper[4141]: I0312 14:11:58.845555 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845548 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845587 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log" (OuterVolumeSpecName: "node-log") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845619 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") "
Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845640 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845664 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "run-openvswitch".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845728 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845777 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845824 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845823 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845858 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845831 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845845 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845850 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845930 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8bw\" (UniqueName: \"kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845940 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.846559 master-0 kubenswrapper[4141]: I0312 14:11:58.845967 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.845997 4141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch\") pod \"86ab127e-897e-48d9-aea7-fd4eec84730f\" (UID: \"86ab127e-897e-48d9-aea7-fd4eec84730f\") " Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846067 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket" (OuterVolumeSpecName: "log-socket") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846140 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846181 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846184 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846221 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846236 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846252 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846285 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846318 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846347 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 
kubenswrapper[4141]: I0312 14:11:58.846379 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846418 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846473 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846539 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.847021 master-0 kubenswrapper[4141]: I0312 14:11:58.846575 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod 
\"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846608 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846638 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846667 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846699 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846728 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846787 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846830 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846860 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846970 4141 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.846993 4141 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-slash\") on node \"master-0\" DevicePath \"\"" 
Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847012 4141 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847032 4141 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-node-log\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847051 4141 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847069 4141 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847084 4141 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847101 4141 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847119 4141 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 12 
14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847136 4141 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847153 4141 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847170 4141 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847187 4141 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.848303 master-0 kubenswrapper[4141]: I0312 14:11:58.847203 4141 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.849199 master-0 kubenswrapper[4141]: I0312 14:11:58.847221 4141 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.849199 master-0 kubenswrapper[4141]: I0312 14:11:58.847237 4141 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86ab127e-897e-48d9-aea7-fd4eec84730f-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 12 
14:11:58.849199 master-0 kubenswrapper[4141]: I0312 14:11:58.847257 4141 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.849199 master-0 kubenswrapper[4141]: I0312 14:11:58.848853 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw" (OuterVolumeSpecName: "kube-api-access-lr8bw") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "kube-api-access-lr8bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:11:58.849199 master-0 kubenswrapper[4141]: I0312 14:11:58.848971 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:11:58.853276 master-0 kubenswrapper[4141]: I0312 14:11:58.853230 4141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "86ab127e-897e-48d9-aea7-fd4eec84730f" (UID: "86ab127e-897e-48d9-aea7-fd4eec84730f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947413 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947460 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947480 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947500 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947522 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.947522 master-0 kubenswrapper[4141]: I0312 14:11:58.947543 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947562 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947581 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947602 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947641 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: 
\"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947664 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947685 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947705 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947726 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947746 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: 
\"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947768 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947786 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947814 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947836 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947855 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: 
\"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947886 4141 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86ab127e-897e-48d9-aea7-fd4eec84730f-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947923 4141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8bw\" (UniqueName: \"kubernetes.io/projected/86ab127e-897e-48d9-aea7-fd4eec84730f-kube-api-access-lr8bw\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947935 4141 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86ab127e-897e-48d9-aea7-fd4eec84730f-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 12 14:11:58.948031 master-0 kubenswrapper[4141]: I0312 14:11:58.947985 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948021 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948052 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948106 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948137 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948145 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948144 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948167 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948263 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948336 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948369 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948508 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948549 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.948730 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.949280 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.949333 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.949383 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.950146 master-0 kubenswrapper[4141]: I0312 14:11:58.949844 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.952476 master-0 kubenswrapper[4141]: I0312 14:11:58.952418 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:58.964087 master-0 kubenswrapper[4141]: I0312 14:11:58.963997 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:59.128079 master-0 kubenswrapper[4141]: I0312 14:11:59.128011 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:11:59.131633 master-0 kubenswrapper[4141]: I0312 14:11:59.131585 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:11:59.132503 master-0 kubenswrapper[4141]: E0312 14:11:59.131709 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:11:59.141737 master-0 kubenswrapper[4141]: W0312 14:11:59.141471 4141 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 12 14:11:59.143618 master-0 kubenswrapper[4141]: I0312 14:11:59.143583 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 14:11:59.486974 master-0 kubenswrapper[4141]: I0312 14:11:59.486877 4141 generic.go:334] "Generic (PLEG): container finished" podID="761993bb-2cba-4e1a-b304-36a24817af94" containerID="e511180297e76f6a11f5330905f38a15021808c15b34dd938afb52d0fc965c91" exitCode=0 Mar 12 14:11:59.487275 master-0 kubenswrapper[4141]: I0312 14:11:59.487002 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerDied","Data":"e511180297e76f6a11f5330905f38a15021808c15b34dd938afb52d0fc965c91"} Mar 12 14:11:59.487275 master-0 kubenswrapper[4141]: I0312 14:11:59.487030 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" 
event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1"} Mar 12 14:11:59.493705 master-0 kubenswrapper[4141]: I0312 14:11:59.493053 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovnkube-controller/0.log" Mar 12 14:11:59.495687 master-0 kubenswrapper[4141]: I0312 14:11:59.495642 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/kube-rbac-proxy-ovn-metrics/0.log" Mar 12 14:11:59.496342 master-0 kubenswrapper[4141]: I0312 14:11:59.496074 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/kube-rbac-proxy-node/0.log" Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.496515 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovn-acl-logging/0.log" Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497031 4141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pq7n2_86ab127e-897e-48d9-aea7-fd4eec84730f/ovn-controller/0.log" Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497557 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd" exitCode=143 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497583 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" exitCode=0 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497594 
4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" exitCode=0 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497604 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624" exitCode=0 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497615 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de" exitCode=143 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497627 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb" exitCode=143 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497634 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa" exitCode=143 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497641 4141 generic.go:334] "Generic (PLEG): container finished" podID="86ab127e-897e-48d9-aea7-fd4eec84730f" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8" exitCode=143 Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497724 4141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497791 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497841 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497865 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497885 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497929 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497950 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" 
event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.497971 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498133 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498146 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498161 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498178 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498191 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498202 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498213 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498224 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} Mar 12 14:11:59.498729 master-0 kubenswrapper[4141]: I0312 14:11:59.498235 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498245 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498255 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498266 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498280 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} Mar 
12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498296 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498307 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498318 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498328 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498338 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498349 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498359 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498370 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498380 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498399 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pq7n2" event={"ID":"86ab127e-897e-48d9-aea7-fd4eec84730f","Type":"ContainerDied","Data":"b89d1bc2f4a8ea2138bad228a8f181af661c81a072e9cd06792d7137bd4ebc43"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498413 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498426 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498437 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498448 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498459 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} Mar 
12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498470 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498481 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498491 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498502 4141 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} Mar 12 14:11:59.502165 master-0 kubenswrapper[4141]: I0312 14:11:59.498524 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd" Mar 12 14:11:59.504292 master-0 kubenswrapper[4141]: I0312 14:11:59.504208 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.504189088 podStartE2EDuration="504.189088ms" podCreationTimestamp="2026-03-12 14:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:11:59.50348041 +0000 UTC m=+94.065052669" watchObservedRunningTime="2026-03-12 14:11:59.504189088 +0000 UTC m=+94.065761337" Mar 12 14:11:59.534398 master-0 kubenswrapper[4141]: I0312 14:11:59.534336 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306" 
Mar 12 14:11:59.547983 master-0 kubenswrapper[4141]: I0312 14:11:59.547869 4141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pq7n2"] Mar 12 14:11:59.556398 master-0 kubenswrapper[4141]: I0312 14:11:59.556119 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e" Mar 12 14:11:59.565343 master-0 kubenswrapper[4141]: I0312 14:11:59.562210 4141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pq7n2"] Mar 12 14:11:59.574572 master-0 kubenswrapper[4141]: I0312 14:11:59.574492 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624" Mar 12 14:11:59.590350 master-0 kubenswrapper[4141]: I0312 14:11:59.590292 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de" Mar 12 14:11:59.600182 master-0 kubenswrapper[4141]: I0312 14:11:59.600150 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb" Mar 12 14:11:59.610333 master-0 kubenswrapper[4141]: I0312 14:11:59.609523 4141 scope.go:117] "RemoveContainer" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa" Mar 12 14:11:59.624247 master-0 kubenswrapper[4141]: I0312 14:11:59.624155 4141 scope.go:117] "RemoveContainer" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8" Mar 12 14:11:59.639513 master-0 kubenswrapper[4141]: I0312 14:11:59.639462 4141 scope.go:117] "RemoveContainer" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d" Mar 12 14:11:59.651156 master-0 kubenswrapper[4141]: I0312 14:11:59.651112 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd" Mar 12 14:11:59.651698 master-0 kubenswrapper[4141]: E0312 14:11:59.651653 4141 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"
Mar 12 14:11:59.651765 master-0 kubenswrapper[4141]: I0312 14:11:59.651711 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} err="failed to get container status \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist"
Mar 12 14:11:59.651765 master-0 kubenswrapper[4141]: I0312 14:11:59.651747 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.652394 master-0 kubenswrapper[4141]: E0312 14:11:59.652362 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.652472 master-0 kubenswrapper[4141]: I0312 14:11:59.652406 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} err="failed to get container status \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist"
Mar 12 14:11:59.652472 master-0 kubenswrapper[4141]: I0312 14:11:59.652441 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.653049 master-0 kubenswrapper[4141]: E0312 14:11:59.653027 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.653119 master-0 kubenswrapper[4141]: I0312 14:11:59.653063 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} err="failed to get container status \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist"
Mar 12 14:11:59.653119 master-0 kubenswrapper[4141]: I0312 14:11:59.653087 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: E0312 14:11:59.653393 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: I0312 14:11:59.653424 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} err="failed to get container status \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: I0312 14:11:59.653444 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: E0312 14:11:59.653685 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: I0312 14:11:59.653715 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} err="failed to get container status \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist"
Mar 12 14:11:59.654022 master-0 kubenswrapper[4141]: I0312 14:11:59.653731 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.654404 master-0 kubenswrapper[4141]: E0312 14:11:59.654063 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.654404 master-0 kubenswrapper[4141]: I0312 14:11:59.654089 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} err="failed to get container status \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist"
Mar 12 14:11:59.654404 master-0 kubenswrapper[4141]: I0312 14:11:59.654110 4141 scope.go:117] "RemoveContainer" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"
Mar 12 14:11:59.654508 master-0 kubenswrapper[4141]: E0312 14:11:59.654429 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": container with ID starting with 6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa not found: ID does not exist" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"
Mar 12 14:11:59.654508 master-0 kubenswrapper[4141]: I0312 14:11:59.654455 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} err="failed to get container status \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": rpc error: code = NotFound desc = could not find container \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": container with ID starting with 6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa not found: ID does not exist"
Mar 12 14:11:59.654508 master-0 kubenswrapper[4141]: I0312 14:11:59.654474 4141 scope.go:117] "RemoveContainer" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"
Mar 12 14:11:59.654826 master-0 kubenswrapper[4141]: E0312 14:11:59.654781 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": container with ID starting with 567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8 not found: ID does not exist" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"
Mar 12 14:11:59.654826 master-0 kubenswrapper[4141]: I0312 14:11:59.654801 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} err="failed to get container status \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": rpc error: code = NotFound desc = could not find container \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": container with ID starting with 567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8 not found: ID does not exist"
Mar 12 14:11:59.654826 master-0 kubenswrapper[4141]: I0312 14:11:59.654819 4141 scope.go:117] "RemoveContainer" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"
Mar 12 14:11:59.655150 master-0 kubenswrapper[4141]: E0312 14:11:59.655117 4141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": container with ID starting with bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d not found: ID does not exist" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"
Mar 12 14:11:59.655394 master-0 kubenswrapper[4141]: I0312 14:11:59.655144 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} err="failed to get container status \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": rpc error: code = NotFound desc = could not find container \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": container with ID starting with bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d not found: ID does not exist"
Mar 12 14:11:59.655394 master-0 kubenswrapper[4141]: I0312 14:11:59.655163 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"
Mar 12 14:11:59.655823 master-0 kubenswrapper[4141]: I0312 14:11:59.655796 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} err="failed to get container status \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist"
Mar 12 14:11:59.655880 master-0 kubenswrapper[4141]: I0312 14:11:59.655820 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.656147 master-0 kubenswrapper[4141]: I0312 14:11:59.656120 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} err="failed to get container status \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist"
Mar 12 14:11:59.656582 master-0 kubenswrapper[4141]: I0312 14:11:59.656146 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.656815 master-0 kubenswrapper[4141]: I0312 14:11:59.656766 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} err="failed to get container status \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist"
Mar 12 14:11:59.656815 master-0 kubenswrapper[4141]: I0312 14:11:59.656811 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.657418 master-0 kubenswrapper[4141]: I0312 14:11:59.657375 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} err="failed to get container status \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist"
Mar 12 14:11:59.657418 master-0 kubenswrapper[4141]: I0312 14:11:59.657404 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.657675 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} err="failed to get container status \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.657697 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658149 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} err="failed to get container status \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658195 4141 scope.go:117] "RemoveContainer" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658598 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} err="failed to get container status \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": rpc error: code = NotFound desc = could not find container \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": container with ID starting with 6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa not found: ID does not exist"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658621 4141 scope.go:117] "RemoveContainer" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658927 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} err="failed to get container status \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": rpc error: code = NotFound desc = could not find container \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": container with ID starting with 567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8 not found: ID does not exist"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.658950 4141 scope.go:117] "RemoveContainer" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.659311 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} err="failed to get container status \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": rpc error: code = NotFound desc = could not find container \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": container with ID starting with bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d not found: ID does not exist"
Mar 12 14:11:59.659475 master-0 kubenswrapper[4141]: I0312 14:11:59.659365 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"
Mar 12 14:11:59.660194 master-0 kubenswrapper[4141]: I0312 14:11:59.659718 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} err="failed to get container status \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist"
Mar 12 14:11:59.660194 master-0 kubenswrapper[4141]: I0312 14:11:59.659747 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.660194 master-0 kubenswrapper[4141]: I0312 14:11:59.660179 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} err="failed to get container status \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist"
Mar 12 14:11:59.660297 master-0 kubenswrapper[4141]: I0312 14:11:59.660248 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.661238 master-0 kubenswrapper[4141]: I0312 14:11:59.660746 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} err="failed to get container status \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist"
Mar 12 14:11:59.661238 master-0 kubenswrapper[4141]: I0312 14:11:59.660775 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.661347 master-0 kubenswrapper[4141]: I0312 14:11:59.661259 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} err="failed to get container status \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist"
Mar 12 14:11:59.661347 master-0 kubenswrapper[4141]: I0312 14:11:59.661300 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.661810 master-0 kubenswrapper[4141]: I0312 14:11:59.661775 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} err="failed to get container status \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist"
Mar 12 14:11:59.661810 master-0 kubenswrapper[4141]: I0312 14:11:59.661800 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.662323 master-0 kubenswrapper[4141]: I0312 14:11:59.662293 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} err="failed to get container status \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist"
Mar 12 14:11:59.662323 master-0 kubenswrapper[4141]: I0312 14:11:59.662318 4141 scope.go:117] "RemoveContainer" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"
Mar 12 14:11:59.662787 master-0 kubenswrapper[4141]: I0312 14:11:59.662757 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} err="failed to get container status \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": rpc error: code = NotFound desc = could not find container \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": container with ID starting with 6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa not found: ID does not exist"
Mar 12 14:11:59.662844 master-0 kubenswrapper[4141]: I0312 14:11:59.662785 4141 scope.go:117] "RemoveContainer" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"
Mar 12 14:11:59.663288 master-0 kubenswrapper[4141]: I0312 14:11:59.663251 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} err="failed to get container status \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": rpc error: code = NotFound desc = could not find container \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": container with ID starting with 567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8 not found: ID does not exist"
Mar 12 14:11:59.663288 master-0 kubenswrapper[4141]: I0312 14:11:59.663280 4141 scope.go:117] "RemoveContainer" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"
Mar 12 14:11:59.663681 master-0 kubenswrapper[4141]: I0312 14:11:59.663652 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} err="failed to get container status \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": rpc error: code = NotFound desc = could not find container \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": container with ID starting with bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d not found: ID does not exist"
Mar 12 14:11:59.663740 master-0 kubenswrapper[4141]: I0312 14:11:59.663679 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"
Mar 12 14:11:59.664188 master-0 kubenswrapper[4141]: I0312 14:11:59.664156 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} err="failed to get container status \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist"
Mar 12 14:11:59.664188 master-0 kubenswrapper[4141]: I0312 14:11:59.664182 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.664616 master-0 kubenswrapper[4141]: I0312 14:11:59.664587 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} err="failed to get container status \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist"
Mar 12 14:11:59.664681 master-0 kubenswrapper[4141]: I0312 14:11:59.664614 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.665050 master-0 kubenswrapper[4141]: I0312 14:11:59.665006 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} err="failed to get container status \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist"
Mar 12 14:11:59.665050 master-0 kubenswrapper[4141]: I0312 14:11:59.665046 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.665419 master-0 kubenswrapper[4141]: I0312 14:11:59.665388 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} err="failed to get container status \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist"
Mar 12 14:11:59.665419 master-0 kubenswrapper[4141]: I0312 14:11:59.665417 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.665870 master-0 kubenswrapper[4141]: I0312 14:11:59.665839 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} err="failed to get container status \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist"
Mar 12 14:11:59.665939 master-0 kubenswrapper[4141]: I0312 14:11:59.665868 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.666330 master-0 kubenswrapper[4141]: I0312 14:11:59.666286 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} err="failed to get container status \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist"
Mar 12 14:11:59.666330 master-0 kubenswrapper[4141]: I0312 14:11:59.666329 4141 scope.go:117] "RemoveContainer" containerID="6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"
Mar 12 14:11:59.666744 master-0 kubenswrapper[4141]: I0312 14:11:59.666673 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa"} err="failed to get container status \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": rpc error: code = NotFound desc = could not find container \"6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa\": container with ID starting with 6a308ae2415ebee16148d586c458843fba594055eb7e3e6b606a1a7e76f1a5aa not found: ID does not exist"
Mar 12 14:11:59.666744 master-0 kubenswrapper[4141]: I0312 14:11:59.666715 4141 scope.go:117] "RemoveContainer" containerID="567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"
Mar 12 14:11:59.667098 master-0 kubenswrapper[4141]: I0312 14:11:59.667062 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8"} err="failed to get container status \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": rpc error: code = NotFound desc = could not find container \"567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8\": container with ID starting with 567d9842518db408bbec5e364e1f8f1ec1ff01e9a9d1daa8da526f56701dc7d8 not found: ID does not exist"
Mar 12 14:11:59.667098 master-0 kubenswrapper[4141]: I0312 14:11:59.667094 4141 scope.go:117] "RemoveContainer" containerID="bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"
Mar 12 14:11:59.667522 master-0 kubenswrapper[4141]: I0312 14:11:59.667490 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d"} err="failed to get container status \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": rpc error: code = NotFound desc = could not find container \"bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d\": container with ID starting with bae8c933f135ba9e504360de26dcec1bce5b47c0f54c3987146fd60ab46c404d not found: ID does not exist"
Mar 12 14:11:59.667575 master-0 kubenswrapper[4141]: I0312 14:11:59.667519 4141 scope.go:117] "RemoveContainer" containerID="99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"
Mar 12 14:11:59.667993 master-0 kubenswrapper[4141]: I0312 14:11:59.667878 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd"} err="failed to get container status \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": rpc error: code = NotFound desc = could not find container \"99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd\": container with ID starting with 99f907b30bbaf5193dcfdf88b0682ee7a01781acce763009cd830889d710aabd not found: ID does not exist"
Mar 12 14:11:59.668370 master-0 kubenswrapper[4141]: I0312 14:11:59.667994 4141 scope.go:117] "RemoveContainer" containerID="1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"
Mar 12 14:11:59.668454 master-0 kubenswrapper[4141]: I0312 14:11:59.668419 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306"} err="failed to get container status \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": rpc error: code = NotFound desc = could not find container \"1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306\": container with ID starting with 1ebb7aa2191080f6bda05b93cbe1894465ad7e71185535c180aa29638ca0d306 not found: ID does not exist"
Mar 12 14:11:59.668454 master-0 kubenswrapper[4141]: I0312 14:11:59.668450 4141 scope.go:117] "RemoveContainer" containerID="caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"
Mar 12 14:11:59.668806 master-0 kubenswrapper[4141]: I0312 14:11:59.668773 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e"} err="failed to get container status \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": rpc error: code = NotFound desc = could not find container \"caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e\": container with ID starting with caa40c63544963168e96bf7603f960592315dfb7ccd5efd78e1313024b4e706e not found: ID does not exist"
Mar 12 14:11:59.668806 master-0 kubenswrapper[4141]: I0312 14:11:59.668805 4141 scope.go:117] "RemoveContainer" containerID="fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"
Mar 12 14:11:59.669200 master-0 kubenswrapper[4141]: I0312 14:11:59.669159 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624"} err="failed to get container status \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": rpc error: code = NotFound desc = could not find container \"fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624\": container with ID starting with fb80a3386fb05769b3cb2366e85fd3ea7d2373b26d3e843a2b9807aca1200624 not found: ID does not exist"
Mar 12 14:11:59.669267 master-0 kubenswrapper[4141]: I0312 14:11:59.669200 4141 scope.go:117] "RemoveContainer" containerID="d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"
Mar 12 14:11:59.669672 master-0 kubenswrapper[4141]: I0312 14:11:59.669636 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de"} err="failed to get container status \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": rpc error: code = NotFound desc = could not find container \"d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de\": container with ID starting with d3bfaab587f27adf5fb59dd49f34edd272b76492a558d7f6738fd906fd1511de not found: ID does not exist"
Mar 12 14:11:59.669672 master-0 kubenswrapper[4141]: I0312 14:11:59.669661 4141 scope.go:117] "RemoveContainer" containerID="8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"
Mar 12 14:11:59.670268 master-0 kubenswrapper[4141]: I0312 14:11:59.670156 4141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb"} err="failed to get container status \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": rpc error: code = NotFound desc = could not find container \"8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb\": container with ID starting with 8bd9a6a3046ed6831bd096d30834febacdf9b816e5f61fb10776a7446317edfb not found: ID does not exist"
Mar 12 14:12:00.131617 master-0 kubenswrapper[4141]: I0312 14:12:00.131282 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:12:00.131617 master-0 kubenswrapper[4141]: E0312 14:12:00.131505 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7"
Mar 12 14:12:00.139043 master-0 kubenswrapper[4141]: I0312 14:12:00.138192 4141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86ab127e-897e-48d9-aea7-fd4eec84730f" path="/var/lib/kubelet/pods/86ab127e-897e-48d9-aea7-fd4eec84730f/volumes"
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504087 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"8f0a5e87a171e977245e106bcb0d14b6d01585868818b13d263c2d666131b999"}
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504132 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"e4481bd232d92f3a57b8f7787193a01f1bf071df01fa34ce50980d73d202ef3b"}
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504147 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"27a2d14c0a647584e5f7a6024a2a5900646e402a88a0ad1b289750c901a9138e"}
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504159 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"79327b58d378e43070a170da3c3a5c7f6760dc0eb1a55c38ce78fc4548e93dd8"}
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504170 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"6e495ef489b9ca0f05277f0691c4af4c593cd41786f5ce51a937f04016e8aa5d"}
Mar 12 14:12:00.504188 master-0 kubenswrapper[4141]: I0312 14:12:00.504181 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"378c98f107287fc6c6428ceb6468176d0f8fb0ce32f629fb877669840b856fb3"}
Mar 12 14:12:01.131020 master-0 kubenswrapper[4141]: I0312 14:12:01.130884 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:01.131377 master-0 kubenswrapper[4141]: E0312 14:12:01.131057 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4"
Mar 12 14:12:02.130661 master-0 kubenswrapper[4141]: I0312 14:12:02.130624 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:02.130991 master-0 kubenswrapper[4141]: E0312 14:12:02.130729 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:02.514338 master-0 kubenswrapper[4141]: I0312 14:12:02.514268 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"c15c1e9e16b30d85f0885585e2fe098199ed4e7cc955b4ed8774d188c849fa6e"} Mar 12 14:12:03.131227 master-0 kubenswrapper[4141]: I0312 14:12:03.131135 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:03.131724 master-0 kubenswrapper[4141]: E0312 14:12:03.131259 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:12:04.131143 master-0 kubenswrapper[4141]: I0312 14:12:04.131056 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:04.131312 master-0 kubenswrapper[4141]: E0312 14:12:04.131246 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:04.527705 master-0 kubenswrapper[4141]: I0312 14:12:04.527638 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"5a171a4570a54c3f9188c37293065f2c1387a33c9d0045159c6fe79364d2cedb"} Mar 12 14:12:04.528001 master-0 kubenswrapper[4141]: I0312 14:12:04.527947 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:04.528053 master-0 kubenswrapper[4141]: I0312 14:12:04.528004 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:04.528053 master-0 kubenswrapper[4141]: I0312 14:12:04.528031 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:04.550077 master-0 kubenswrapper[4141]: I0312 14:12:04.550035 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:04.550389 master-0 kubenswrapper[4141]: I0312 14:12:04.550368 4141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:04.648785 master-0 kubenswrapper[4141]: I0312 14:12:04.648618 4141 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" podStartSLOduration=6.648597245 podStartE2EDuration="6.648597245s" podCreationTimestamp="2026-03-12 14:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:04.646302508 +0000 UTC m=+99.207874757" watchObservedRunningTime="2026-03-12 14:12:04.648597245 +0000 UTC m=+99.210169494" Mar 12 14:12:05.130914 master-0 kubenswrapper[4141]: I0312 14:12:05.130818 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:05.131097 master-0 kubenswrapper[4141]: E0312 14:12:05.130988 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:12:05.507757 master-0 kubenswrapper[4141]: I0312 14:12:05.506383 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:05.507757 master-0 kubenswrapper[4141]: E0312 14:12:05.506669 4141 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:05.507757 master-0 kubenswrapper[4141]: E0312 14:12:05.506823 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:13:09.506782701 +0000 UTC m=+164.068354990 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:06.131245 master-0 kubenswrapper[4141]: I0312 14:12:06.131200 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:06.134955 master-0 kubenswrapper[4141]: E0312 14:12:06.132370 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:06.374299 master-0 kubenswrapper[4141]: I0312 14:12:06.374249 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n9v7g"] Mar 12 14:12:06.374507 master-0 kubenswrapper[4141]: I0312 14:12:06.374413 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:06.374629 master-0 kubenswrapper[4141]: E0312 14:12:06.374583 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:12:06.374728 master-0 kubenswrapper[4141]: I0312 14:12:06.374636 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8q2fv"] Mar 12 14:12:06.533654 master-0 kubenswrapper[4141]: I0312 14:12:06.533607 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:06.534392 master-0 kubenswrapper[4141]: E0312 14:12:06.533937 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:08.131408 master-0 kubenswrapper[4141]: I0312 14:12:08.131108 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:08.132050 master-0 kubenswrapper[4141]: I0312 14:12:08.131434 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:08.132050 master-0 kubenswrapper[4141]: E0312 14:12:08.131475 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:08.132050 master-0 kubenswrapper[4141]: E0312 14:12:08.131653 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:12:08.229168 master-0 kubenswrapper[4141]: I0312 14:12:08.229100 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:08.229349 master-0 kubenswrapper[4141]: E0312 14:12:08.229303 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 14:12:08.229349 master-0 kubenswrapper[4141]: E0312 14:12:08.229330 4141 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 14:12:08.229349 master-0 kubenswrapper[4141]: E0312 14:12:08.229342 4141 projected.go:194] Error preparing data for projected volume kube-api-access-qvngn for pod openshift-network-diagnostics/network-check-target-8q2fv: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:12:08.229548 master-0 kubenswrapper[4141]: E0312 14:12:08.229395 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn podName:8e733069-752a-4140-83eb-8287f1bce1a7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:40.229375597 +0000 UTC m=+134.790947916 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qvngn" (UniqueName: "kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn") pod "network-check-target-8q2fv" (UID: "8e733069-752a-4140-83eb-8287f1bce1a7") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 14:12:10.131022 master-0 kubenswrapper[4141]: I0312 14:12:10.130547 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:10.131022 master-0 kubenswrapper[4141]: I0312 14:12:10.130556 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:10.131022 master-0 kubenswrapper[4141]: E0312 14:12:10.130875 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8q2fv" podUID="8e733069-752a-4140-83eb-8287f1bce1a7" Mar 12 14:12:10.131954 master-0 kubenswrapper[4141]: E0312 14:12:10.131166 4141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n9v7g" podUID="7fdce71e-8085-4316-be40-e535530c2ca4" Mar 12 14:12:10.145225 master-0 kubenswrapper[4141]: I0312 14:12:10.145166 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 14:12:10.188521 master-0 kubenswrapper[4141]: I0312 14:12:10.188418 4141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 12 14:12:10.188662 master-0 kubenswrapper[4141]: I0312 14:12:10.188570 4141 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 12 14:12:10.224886 master-0 kubenswrapper[4141]: I0312 14:12:10.224784 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"] Mar 12 14:12:10.224886 master-0 kubenswrapper[4141]: I0312 14:12:10.225195 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.232231 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv"] Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.232685 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.232915 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.233562 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.233706 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.233810 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.239542 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"] Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.239757 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.239889 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.242976 master-0 kubenswrapper[4141]: I0312 14:12:10.240651 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.244491 master-0 kubenswrapper[4141]: I0312 14:12:10.244282 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:12:10.244491 master-0 kubenswrapper[4141]: I0312 14:12:10.244338 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.244491 master-0 kubenswrapper[4141]: I0312 14:12:10.244358 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.244491 master-0 kubenswrapper[4141]: I0312 14:12:10.244375 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8pd\" 
(UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.244491 master-0 kubenswrapper[4141]: I0312 14:12:10.244400 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.247655 master-0 kubenswrapper[4141]: I0312 14:12:10.247333 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.247655 master-0 kubenswrapper[4141]: I0312 14:12:10.247369 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:12:10.247655 master-0 kubenswrapper[4141]: I0312 14:12:10.247472 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:12:10.247655 master-0 kubenswrapper[4141]: I0312 14:12:10.247544 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:12:10.251694 master-0 kubenswrapper[4141]: I0312 14:12:10.251506 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"] Mar 12 14:12:10.257060 master-0 kubenswrapper[4141]: I0312 14:12:10.255999 4141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 14:12:10.261509 master-0 kubenswrapper[4141]: I0312 14:12:10.258795 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"] Mar 12 14:12:10.261509 master-0 kubenswrapper[4141]: I0312 14:12:10.259449 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"] Mar 12 14:12:10.267316 master-0 kubenswrapper[4141]: I0312 14:12:10.265984 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.270017 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"] Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.270353 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"] Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.270595 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"] Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.270705 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.270968 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.271001 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.271128 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.271244 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.272238 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 14:12:10.274018 master-0 kubenswrapper[4141]: I0312 14:12:10.272256 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=0.272241239 podStartE2EDuration="272.241239ms" podCreationTimestamp="2026-03-12 14:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:10.271597423 +0000 UTC m=+104.833169692" watchObservedRunningTime="2026-03-12 14:12:10.272241239 +0000 UTC m=+104.833813508" Mar 12 14:12:10.274713 master-0 kubenswrapper[4141]: I0312 14:12:10.274154 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.276714 master-0 kubenswrapper[4141]: I0312 14:12:10.276664 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.276965 master-0 kubenswrapper[4141]: I0312 14:12:10.276943 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 
14:12:10.277115 master-0 kubenswrapper[4141]: I0312 14:12:10.277092 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 14:12:10.277268 master-0 kubenswrapper[4141]: I0312 14:12:10.277247 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:12:10.277418 master-0 kubenswrapper[4141]: I0312 14:12:10.277395 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:12:10.277671 master-0 kubenswrapper[4141]: I0312 14:12:10.277646 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:12:10.277826 master-0 kubenswrapper[4141]: I0312 14:12:10.277804 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 14:12:10.278031 master-0 kubenswrapper[4141]: I0312 14:12:10.278008 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:12:10.278401 master-0 kubenswrapper[4141]: I0312 14:12:10.278367 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"] Mar 12 14:12:10.278966 master-0 kubenswrapper[4141]: I0312 14:12:10.278942 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"] Mar 12 14:12:10.279366 master-0 kubenswrapper[4141]: I0312 14:12:10.279343 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-44hhf"] Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278386 4141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278499 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278547 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278580 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278702 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:12:10.282052 master-0 kubenswrapper[4141]: I0312 14:12:10.278802 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 14:12:10.282458 master-0 kubenswrapper[4141]: I0312 14:12:10.278946 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:12:10.282458 master-0 kubenswrapper[4141]: I0312 14:12:10.278992 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:12:10.282559 master-0 kubenswrapper[4141]: I0312 14:12:10.279051 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:12:10.282671 master-0 kubenswrapper[4141]: I0312 14:12:10.279080 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.283045 master-0 kubenswrapper[4141]: I0312 14:12:10.283013 4141 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.283321 master-0 kubenswrapper[4141]: I0312 14:12:10.283292 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"] Mar 12 14:12:10.283650 master-0 kubenswrapper[4141]: I0312 14:12:10.283619 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.283940 master-0 kubenswrapper[4141]: I0312 14:12:10.283893 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"] Mar 12 14:12:10.284269 master-0 kubenswrapper[4141]: I0312 14:12:10.284239 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.284492 master-0 kubenswrapper[4141]: I0312 14:12:10.284470 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"] Mar 12 14:12:10.284985 master-0 kubenswrapper[4141]: I0312 14:12:10.284965 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.285193 master-0 kubenswrapper[4141]: I0312 14:12:10.285168 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"] Mar 12 14:12:10.285411 master-0 kubenswrapper[4141]: I0312 14:12:10.285274 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.285728 master-0 kubenswrapper[4141]: I0312 14:12:10.285701 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"] Mar 12 14:12:10.286073 master-0 kubenswrapper[4141]: I0312 14:12:10.286038 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.286311 master-0 kubenswrapper[4141]: I0312 14:12:10.286287 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"] Mar 12 14:12:10.288991 master-0 kubenswrapper[4141]: I0312 14:12:10.285271 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.288991 master-0 kubenswrapper[4141]: I0312 14:12:10.287952 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.289638 master-0 kubenswrapper[4141]: I0312 14:12:10.289608 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.290819 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"] Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.291330 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.291361 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-q4wwv"] Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.291760 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.293244 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.293590 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.294002 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp"] Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.294277 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.295830 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.296055 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.296363 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.296787 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"] Mar 12 14:12:10.297618 master-0 kubenswrapper[4141]: I0312 14:12:10.297210 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:12:10.298446 master-0 kubenswrapper[4141]: I0312 14:12:10.297819 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:12:10.298446 master-0 kubenswrapper[4141]: I0312 14:12:10.297852 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.298446 master-0 kubenswrapper[4141]: I0312 14:12:10.298029 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:12:10.298446 master-0 kubenswrapper[4141]: I0312 14:12:10.298190 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:12:10.298829 master-0 
kubenswrapper[4141]: I0312 14:12:10.298797 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.299150 master-0 kubenswrapper[4141]: I0312 14:12:10.299127 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:12:10.299483 master-0 kubenswrapper[4141]: I0312 14:12:10.299449 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.299567 master-0 kubenswrapper[4141]: I0312 14:12:10.299518 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 14:12:10.299622 master-0 kubenswrapper[4141]: I0312 14:12:10.299592 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:12:10.299680 master-0 kubenswrapper[4141]: I0312 14:12:10.299627 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 14:12:10.299680 master-0 kubenswrapper[4141]: I0312 14:12:10.299593 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:12:10.299781 master-0 kubenswrapper[4141]: I0312 14:12:10.299698 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.299781 master-0 kubenswrapper[4141]: I0312 14:12:10.299760 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:12:10.299884 master-0 kubenswrapper[4141]: I0312 14:12:10.299803 4141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:12:10.299966 master-0 kubenswrapper[4141]: I0312 14:12:10.299914 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:12:10.300036 master-0 kubenswrapper[4141]: I0312 14:12:10.299978 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:12:10.300097 master-0 kubenswrapper[4141]: I0312 14:12:10.300044 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:12:10.300097 master-0 kubenswrapper[4141]: I0312 14:12:10.300071 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:12:10.300783 master-0 kubenswrapper[4141]: I0312 14:12:10.300489 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:12:10.300783 master-0 kubenswrapper[4141]: I0312 14:12:10.300592 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 14:12:10.300783 master-0 kubenswrapper[4141]: I0312 14:12:10.300728 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 14:12:10.301099 master-0 kubenswrapper[4141]: I0312 14:12:10.301074 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:12:10.301589 master-0 kubenswrapper[4141]: I0312 14:12:10.301555 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 14:12:10.301748 master-0 kubenswrapper[4141]: I0312 14:12:10.301719 4141 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 14:12:10.301929 master-0 kubenswrapper[4141]: I0312 14:12:10.301872 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 14:12:10.302067 master-0 kubenswrapper[4141]: I0312 14:12:10.302038 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 14:12:10.302167 master-0 kubenswrapper[4141]: I0312 14:12:10.302048 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 14:12:10.302394 master-0 kubenswrapper[4141]: I0312 14:12:10.302365 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 14:12:10.302518 master-0 kubenswrapper[4141]: I0312 14:12:10.302497 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 14:12:10.302709 master-0 kubenswrapper[4141]: I0312 14:12:10.302499 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 14:12:10.302977 master-0 kubenswrapper[4141]: I0312 14:12:10.301511 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:12:10.303210 master-0 kubenswrapper[4141]: I0312 14:12:10.302162 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 14:12:10.303402 master-0 kubenswrapper[4141]: I0312 14:12:10.302216 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:12:10.303551 master-0 kubenswrapper[4141]: I0312 14:12:10.301425 4141 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 14:12:10.303690 master-0 kubenswrapper[4141]: I0312 14:12:10.301474 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 14:12:10.303836 master-0 kubenswrapper[4141]: I0312 14:12:10.302592 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 14:12:10.318463 master-0 kubenswrapper[4141]: I0312 14:12:10.318423 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 14:12:10.320762 master-0 kubenswrapper[4141]: I0312 14:12:10.318784 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv"] Mar 12 14:12:10.320975 master-0 kubenswrapper[4141]: I0312 14:12:10.320948 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"] Mar 12 14:12:10.321289 master-0 kubenswrapper[4141]: I0312 14:12:10.321257 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 14:12:10.322374 master-0 kubenswrapper[4141]: I0312 14:12:10.322352 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"] Mar 12 14:12:10.324643 master-0 kubenswrapper[4141]: I0312 14:12:10.324604 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"] Mar 12 14:12:10.327129 master-0 kubenswrapper[4141]: I0312 14:12:10.327100 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"] Mar 12 14:12:10.327477 master-0 kubenswrapper[4141]: I0312 
14:12:10.327449 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 14:12:10.328227 master-0 kubenswrapper[4141]: I0312 14:12:10.328159 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"] Mar 12 14:12:10.329032 master-0 kubenswrapper[4141]: I0312 14:12:10.329007 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"] Mar 12 14:12:10.329949 master-0 kubenswrapper[4141]: I0312 14:12:10.329926 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-q4wwv"] Mar 12 14:12:10.331335 master-0 kubenswrapper[4141]: I0312 14:12:10.331312 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp"] Mar 12 14:12:10.331974 master-0 kubenswrapper[4141]: I0312 14:12:10.331923 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:12:10.332604 master-0 kubenswrapper[4141]: I0312 14:12:10.332578 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 14:12:10.333398 master-0 kubenswrapper[4141]: I0312 14:12:10.333368 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"] Mar 12 14:12:10.334146 master-0 kubenswrapper[4141]: I0312 14:12:10.334121 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"] Mar 12 14:12:10.335098 master-0 kubenswrapper[4141]: I0312 14:12:10.335067 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"] Mar 12 14:12:10.336039 master-0 kubenswrapper[4141]: 
I0312 14:12:10.336014 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"] Mar 12 14:12:10.337281 master-0 kubenswrapper[4141]: I0312 14:12:10.337231 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-vb4v5"] Mar 12 14:12:10.337819 master-0 kubenswrapper[4141]: I0312 14:12:10.337793 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.338012 master-0 kubenswrapper[4141]: I0312 14:12:10.337984 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"] Mar 12 14:12:10.339369 master-0 kubenswrapper[4141]: I0312 14:12:10.339338 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"] Mar 12 14:12:10.340160 master-0 kubenswrapper[4141]: I0312 14:12:10.340136 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:12:10.340319 master-0 kubenswrapper[4141]: I0312 14:12:10.340299 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-44hhf"] Mar 12 14:12:10.341166 master-0 kubenswrapper[4141]: I0312 14:12:10.341143 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"] Mar 12 14:12:10.342109 master-0 kubenswrapper[4141]: I0312 14:12:10.342075 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"] Mar 12 14:12:10.343085 master-0 kubenswrapper[4141]: I0312 14:12:10.343033 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"] Mar 12 14:12:10.344719 master-0 kubenswrapper[4141]: I0312 14:12:10.344686 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.344793 master-0 kubenswrapper[4141]: I0312 14:12:10.344725 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.344833 master-0 kubenswrapper[4141]: I0312 14:12:10.344799 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.344874 master-0 kubenswrapper[4141]: I0312 14:12:10.344829 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.344874 master-0 kubenswrapper[4141]: I0312 
14:12:10.344851 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.344963 master-0 kubenswrapper[4141]: I0312 14:12:10.344872 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.344963 master-0 kubenswrapper[4141]: I0312 14:12:10.344913 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.344963 master-0 kubenswrapper[4141]: I0312 14:12:10.344940 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.345084 master-0 kubenswrapper[4141]: I0312 14:12:10.344963 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.345084 master-0 kubenswrapper[4141]: I0312 14:12:10.345008 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.345084 master-0 kubenswrapper[4141]: I0312 14:12:10.345072 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.345190 master-0 kubenswrapper[4141]: I0312 14:12:10.345126 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.345190 master-0 kubenswrapper[4141]: I0312 14:12:10.345156 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: 
\"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.345334 master-0 kubenswrapper[4141]: I0312 14:12:10.345274 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.345334 master-0 kubenswrapper[4141]: I0312 14:12:10.345314 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.345409 master-0 kubenswrapper[4141]: I0312 14:12:10.345342 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.345452 master-0 kubenswrapper[4141]: I0312 14:12:10.345419 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.345501 master-0 kubenswrapper[4141]: I0312 14:12:10.345469 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.345547 master-0 kubenswrapper[4141]: I0312 14:12:10.345499 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.345547 master-0 kubenswrapper[4141]: I0312 14:12:10.345528 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.345627 master-0 kubenswrapper[4141]: I0312 14:12:10.345553 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 
14:12:10.345627 master-0 kubenswrapper[4141]: E0312 14:12:10.345553 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:10.345627 master-0 kubenswrapper[4141]: E0312 14:12:10.345611 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.845594307 +0000 UTC m=+105.407166556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:10.345627 master-0 kubenswrapper[4141]: I0312 14:12:10.345563 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:12:10.345771 master-0 kubenswrapper[4141]: I0312 14:12:10.345637 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.345771 master-0 kubenswrapper[4141]: I0312 14:12:10.345663 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: 
\"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.345771 master-0 kubenswrapper[4141]: I0312 14:12:10.345688 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.345771 master-0 kubenswrapper[4141]: I0312 14:12:10.345712 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.345771 master-0 kubenswrapper[4141]: I0312 14:12:10.345762 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345797 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 
14:12:10.345820 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345847 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345877 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345917 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345938 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.345981 master-0 kubenswrapper[4141]: I0312 14:12:10.345960 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.345983 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346024 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346058 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod 
\"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346085 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346111 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346134 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346156 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346178 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.346230 master-0 kubenswrapper[4141]: I0312 14:12:10.346202 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.346550 master-0 kubenswrapper[4141]: I0312 14:12:10.346327 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.346550 master-0 kubenswrapper[4141]: I0312 14:12:10.346417 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.346550 master-0 kubenswrapper[4141]: I0312 14:12:10.346465 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.346550 master-0 kubenswrapper[4141]: I0312 14:12:10.346491 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.346550 master-0 kubenswrapper[4141]: I0312 14:12:10.346533 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346561 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.346734 master-0 
kubenswrapper[4141]: I0312 14:12:10.346589 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346613 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346636 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346657 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: E0312 14:12:10.346664 4141 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346676 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346699 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: E0312 14:12:10.346709 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.846696874 +0000 UTC m=+105.408269233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:10.346734 master-0 kubenswrapper[4141]: I0312 14:12:10.346732 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346765 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346855 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346882 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346940 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346965 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346991 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.346997 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"] Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.347027 4141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.347115 master-0 kubenswrapper[4141]: I0312 14:12:10.347055 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:12:10.347471 master-0 kubenswrapper[4141]: I0312 14:12:10.347357 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.348815 master-0 kubenswrapper[4141]: I0312 14:12:10.348767 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.348925 master-0 kubenswrapper[4141]: I0312 14:12:10.348836 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6z8v\" (UniqueName: 
\"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.348925 master-0 kubenswrapper[4141]: I0312 14:12:10.348862 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.349540 master-0 kubenswrapper[4141]: I0312 14:12:10.349500 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.368454 master-0 kubenswrapper[4141]: I0312 14:12:10.368412 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.369111 master-0 kubenswrapper[4141]: I0312 14:12:10.369077 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: 
\"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:12:10.449532 master-0 kubenswrapper[4141]: I0312 14:12:10.449493 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.449683 master-0 kubenswrapper[4141]: I0312 14:12:10.449537 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.449683 master-0 kubenswrapper[4141]: I0312 14:12:10.449566 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.450315 master-0 kubenswrapper[4141]: I0312 14:12:10.450278 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.450377 master-0 kubenswrapper[4141]: I0312 14:12:10.450327 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.450377 master-0 kubenswrapper[4141]: I0312 14:12:10.450355 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.450377 master-0 kubenswrapper[4141]: I0312 14:12:10.450379 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.450477 master-0 kubenswrapper[4141]: I0312 14:12:10.450401 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.450477 master-0 kubenswrapper[4141]: I0312 14:12:10.450437 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: 
\"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.450477 master-0 kubenswrapper[4141]: I0312 14:12:10.450459 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.450553 master-0 kubenswrapper[4141]: I0312 14:12:10.450484 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.450553 master-0 kubenswrapper[4141]: I0312 14:12:10.450507 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.450553 master-0 kubenswrapper[4141]: I0312 14:12:10.450528 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.450729 master-0 kubenswrapper[4141]: I0312 
14:12:10.450684 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.450729 master-0 kubenswrapper[4141]: I0312 14:12:10.450702 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.451060 master-0 kubenswrapper[4141]: I0312 14:12:10.450749 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.451060 master-0 kubenswrapper[4141]: I0312 14:12:10.450777 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.451060 master-0 kubenswrapper[4141]: I0312 14:12:10.450803 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod 
\"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.451060 master-0 kubenswrapper[4141]: E0312 14:12:10.450936 4141 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:10.451060 master-0 kubenswrapper[4141]: E0312 14:12:10.450998 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.950978448 +0000 UTC m=+105.512550767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:10.451333 master-0 kubenswrapper[4141]: I0312 14:12:10.451123 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.451333 master-0 kubenswrapper[4141]: E0312 14:12:10.451238 4141 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:10.451395 master-0 kubenswrapper[4141]: E0312 14:12:10.451347 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:10.951329197 +0000 UTC m=+105.512901516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:10.452226 master-0 kubenswrapper[4141]: I0312 14:12:10.452189 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.452293 master-0 kubenswrapper[4141]: I0312 14:12:10.452249 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452295 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452315 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452334 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452366 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452440 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.452710 master-0 kubenswrapper[4141]: I0312 14:12:10.452463 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.453029 master-0 kubenswrapper[4141]: I0312 14:12:10.452935 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.453029 master-0 kubenswrapper[4141]: I0312 14:12:10.452982 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.453029 master-0 kubenswrapper[4141]: I0312 14:12:10.453025 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.453134 master-0 kubenswrapper[4141]: I0312 14:12:10.453058 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453318 4141 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453298 4141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453353 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453374 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453379 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453390 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") pod 
\"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453428 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.953407729 +0000 UTC m=+105.514979968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453455 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453491 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453500 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: 
\"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453512 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453572 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.953554872 +0000 UTC m=+105.515127191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: I0312 14:12:10.453603 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453652 4141 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:10.454185 master-0 kubenswrapper[4141]: E0312 14:12:10.453683 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.953674575 +0000 UTC m=+105.515246824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453650 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453743 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453803 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453834 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453862 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.453888 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454341 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454370 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454396 4141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454421 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454445 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454470 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454493 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454514 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.454742 master-0 kubenswrapper[4141]: I0312 14:12:10.454550 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.454575 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.454589 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.454599 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.454625 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455001 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455046 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455113 4141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455135 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455154 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455174 4141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455193 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod 
\"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455209 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455221 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: I0312 14:12:10.455225 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.455255 master-0 kubenswrapper[4141]: E0312 14:12:10.455267 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.455283 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod 
\"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: E0312 14:12:10.455301 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.955289395 +0000 UTC m=+105.516861644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.455315 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.455336 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.455392 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") 
pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.454003 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.455856 master-0 kubenswrapper[4141]: I0312 14:12:10.455717 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.456176 master-0 kubenswrapper[4141]: I0312 14:12:10.456123 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.456225 master-0 kubenswrapper[4141]: I0312 14:12:10.456189 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.456556 master-0 kubenswrapper[4141]: I0312 14:12:10.456510 4141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.456888 master-0 kubenswrapper[4141]: I0312 14:12:10.456669 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.457584 master-0 kubenswrapper[4141]: I0312 14:12:10.457122 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.457969 master-0 kubenswrapper[4141]: E0312 14:12:10.457527 4141 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 14:12:10.457969 master-0 kubenswrapper[4141]: I0312 14:12:10.457842 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.457969 master-0 kubenswrapper[4141]: E0312 14:12:10.457850 4141 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.957838198 +0000 UTC m=+105.519410447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:10.457969 master-0 kubenswrapper[4141]: E0312 14:12:10.457852 4141 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:10.457969 master-0 kubenswrapper[4141]: E0312 14:12:10.457941 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:10.957923461 +0000 UTC m=+105.519495820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:10.458191 master-0 kubenswrapper[4141]: I0312 14:12:10.457975 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.458191 master-0 kubenswrapper[4141]: I0312 14:12:10.458106 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.458302 master-0 kubenswrapper[4141]: I0312 14:12:10.458276 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.458302 master-0 kubenswrapper[4141]: I0312 14:12:10.458291 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.458409 master-0 kubenswrapper[4141]: I0312 14:12:10.458318 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.458409 master-0 kubenswrapper[4141]: I0312 14:12:10.458384 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.458409 master-0 kubenswrapper[4141]: E0312 14:12:10.458392 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:10.458409 master-0 kubenswrapper[4141]: I0312 14:12:10.458392 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.458548 master-0 kubenswrapper[4141]: E0312 14:12:10.458431 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:10.958418982 +0000 UTC m=+105.519991351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:10.458548 master-0 kubenswrapper[4141]: I0312 14:12:10.458510 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.458625 master-0 kubenswrapper[4141]: I0312 14:12:10.458612 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.458849 master-0 kubenswrapper[4141]: I0312 14:12:10.458803 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.459057 master-0 kubenswrapper[4141]: I0312 14:12:10.459032 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: 
\"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.460099 master-0 kubenswrapper[4141]: I0312 14:12:10.460079 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.460349 master-0 kubenswrapper[4141]: I0312 14:12:10.460323 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.460953 master-0 kubenswrapper[4141]: I0312 14:12:10.460887 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.474493 master-0 kubenswrapper[4141]: I0312 14:12:10.466759 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.478842 master-0 kubenswrapper[4141]: I0312 14:12:10.478782 
4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.480042 master-0 kubenswrapper[4141]: I0312 14:12:10.480010 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.480860 master-0 kubenswrapper[4141]: I0312 14:12:10.480837 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.483093 master-0 kubenswrapper[4141]: I0312 14:12:10.483059 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.483178 master-0 kubenswrapper[4141]: I0312 14:12:10.483134 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") 
pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.484939 master-0 kubenswrapper[4141]: I0312 14:12:10.484890 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.485332 master-0 kubenswrapper[4141]: I0312 14:12:10.485312 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.485456 master-0 kubenswrapper[4141]: I0312 14:12:10.485426 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.486967 master-0 kubenswrapper[4141]: I0312 14:12:10.486209 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.500101 master-0 kubenswrapper[4141]: I0312 14:12:10.500075 4141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.524338 master-0 kubenswrapper[4141]: I0312 14:12:10.524312 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.537461 master-0 kubenswrapper[4141]: I0312 14:12:10.537426 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:10.543649 master-0 kubenswrapper[4141]: I0312 14:12:10.543616 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.544321 master-0 kubenswrapper[4141]: I0312 14:12:10.544301 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:10.558504 master-0 kubenswrapper[4141]: I0312 14:12:10.558452 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.559033 master-0 kubenswrapper[4141]: I0312 14:12:10.558970 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.559181 master-0 kubenswrapper[4141]: I0312 14:12:10.559145 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.559249 master-0 kubenswrapper[4141]: I0312 14:12:10.559223 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.560119 master-0 kubenswrapper[4141]: I0312 14:12:10.560084 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.563344 master-0 kubenswrapper[4141]: I0312 14:12:10.563323 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.579525 master-0 kubenswrapper[4141]: I0312 14:12:10.579482 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.601057 master-0 kubenswrapper[4141]: I0312 14:12:10.600960 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.624066 master-0 kubenswrapper[4141]: I0312 14:12:10.622957 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:12:10.638327 master-0 kubenswrapper[4141]: I0312 14:12:10.630083 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:10.647449 master-0 kubenswrapper[4141]: I0312 14:12:10.638651 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.652454 master-0 kubenswrapper[4141]: I0312 14:12:10.650634 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.660558 master-0 kubenswrapper[4141]: I0312 14:12:10.660522 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:10.671182 master-0 kubenswrapper[4141]: I0312 14:12:10.668367 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.671182 master-0 kubenswrapper[4141]: I0312 14:12:10.670999 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:10.687343 master-0 kubenswrapper[4141]: I0312 14:12:10.686071 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:10.693872 master-0 kubenswrapper[4141]: I0312 14:12:10.693121 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:10.702543 master-0 kubenswrapper[4141]: I0312 14:12:10.702484 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:12:10.717968 master-0 kubenswrapper[4141]: I0312 14:12:10.714603 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.730215 master-0 kubenswrapper[4141]: I0312 14:12:10.730075 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.748860 master-0 kubenswrapper[4141]: I0312 
14:12:10.748279 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.781038 master-0 kubenswrapper[4141]: I0312 14:12:10.780936 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"] Mar 12 14:12:10.783182 master-0 kubenswrapper[4141]: I0312 14:12:10.783138 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"] Mar 12 14:12:10.792543 master-0 kubenswrapper[4141]: I0312 14:12:10.792492 4141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.818165 master-0 kubenswrapper[4141]: I0312 14:12:10.815994 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:10.863689 master-0 kubenswrapper[4141]: I0312 14:12:10.863581 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.865662 master-0 kubenswrapper[4141]: I0312 14:12:10.865463 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:10.865811 master-0 kubenswrapper[4141]: E0312 14:12:10.865671 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:10.865811 master-0 kubenswrapper[4141]: E0312 14:12:10.865733 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.865717005 +0000 UTC m=+106.427289254 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:10.866022 master-0 kubenswrapper[4141]: E0312 14:12:10.865946 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:10.866022 master-0 kubenswrapper[4141]: E0312 14:12:10.865983 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.865971181 +0000 UTC m=+106.427543430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:10.871471 master-0 kubenswrapper[4141]: I0312 14:12:10.871430 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:10.873756 master-0 kubenswrapper[4141]: I0312 14:12:10.873337 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv"] Mar 12 14:12:10.876735 master-0 kubenswrapper[4141]: I0312 14:12:10.876701 4141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:10.952065 master-0 kubenswrapper[4141]: I0312 14:12:10.950417 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967280 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967329 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967354 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967379 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: 
\"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967434 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967458 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967483 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967514 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967529 
4141 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: I0312 14:12:10.967551 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967583 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.96756724 +0000 UTC m=+106.529139489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967674 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967707 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.967696853 +0000 UTC m=+106.529269102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967759 4141 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967785 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.967776575 +0000 UTC m=+106.529348824 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:10.967917 master-0 kubenswrapper[4141]: E0312 14:12:10.967832 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.967860 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.967851506 +0000 UTC m=+106.529423765 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.967923 4141 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.967954 4141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.967958 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.967948009 +0000 UTC m=+106.529520258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968003 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968008 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.96799757 +0000 UTC m=+106.529569819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968031 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.96802269 +0000 UTC m=+106.529594939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968061 4141 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968080 4141 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968091 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.968082632 +0000 UTC m=+106.529654881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found
Mar 12 14:12:10.968750 master-0 kubenswrapper[4141]: E0312 14:12:10.968109 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:11.968099872 +0000 UTC m=+106.529672121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found
Mar 12 14:12:10.986163 master-0 kubenswrapper[4141]: I0312 14:12:10.985683 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:12:11.065509 master-0 kubenswrapper[4141]: I0312 14:12:11.064717 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"]
Mar 12 14:12:11.065509 master-0 kubenswrapper[4141]: I0312 14:12:11.064934 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"]
Mar 12 14:12:11.126303 master-0 kubenswrapper[4141]: I0312 14:12:11.126258 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp"]
Mar 12 14:12:11.144120 master-0 kubenswrapper[4141]: W0312 14:12:11.143369 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d596c0_6a41_43e1_9516_aee9ad834ec2.slice/crio-b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7 WatchSource:0}: Error finding container b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7: Status 404 returned error can't find the container with id b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7
Mar 12 14:12:11.148272 master-0 kubenswrapper[4141]: I0312 14:12:11.148216 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"]
Mar 12 14:12:11.180100 master-0 kubenswrapper[4141]: I0312 14:12:11.180045 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"]
Mar 12 14:12:11.190995 master-0 kubenswrapper[4141]: W0312 14:12:11.186403 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd00a8cc7_7774_40bd_94a1_9ac2d0f63234.slice/crio-59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66 WatchSource:0}: Error finding container 59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66: Status 404 returned error can't find the container with id 59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66
Mar 12 14:12:11.215558 master-0 kubenswrapper[4141]: I0312 14:12:11.215509 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"]
Mar 12 14:12:11.218013 master-0 kubenswrapper[4141]: I0312 14:12:11.217842 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"]
Mar 12 14:12:11.218312 master-0 kubenswrapper[4141]: I0312 14:12:11.218277 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"]
Mar 12 14:12:11.219665 master-0 kubenswrapper[4141]: I0312 14:12:11.219263 4141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"]
Mar 12 14:12:11.222657 master-0 kubenswrapper[4141]: W0312 14:12:11.222579 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ed125c_cbc0_4dfd_b006_f8d8bce3adb2.slice/crio-643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917 WatchSource:0}: Error finding container 643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917: Status 404 returned error can't find the container with id 643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917
Mar 12 14:12:11.224685 master-0 kubenswrapper[4141]: W0312 14:12:11.224633 4141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f72fbbe_69f0_4622_be05_b839ff9b4d45.slice/crio-84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795 WatchSource:0}: Error finding container 84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795: Status 404 returned error can't find the container with id 84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795
Mar 12 14:12:11.552916 master-0 kubenswrapper[4141]: I0312 14:12:11.552592 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8"}
Mar 12 14:12:11.558178 master-0 kubenswrapper[4141]: I0312 14:12:11.558152 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"4bcb9b48cc8fca228497ac0b2a61db8d6fd6ac7df91adf72143bbed36d3bb12a"}
Mar 12 14:12:11.558178 master-0 kubenswrapper[4141]: I0312 14:12:11.558180 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917"}
Mar 12 14:12:11.564325 master-0 kubenswrapper[4141]: I0312 14:12:11.564297 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66"}
Mar 12 14:12:11.566107 master-0 kubenswrapper[4141]: I0312 14:12:11.566080 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e"}
Mar 12 14:12:11.567885 master-0 kubenswrapper[4141]: I0312 14:12:11.567856 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795"}
Mar 12 14:12:11.569126 master-0 kubenswrapper[4141]: I0312 14:12:11.569087 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b"}
Mar 12 14:12:11.570349 master-0 kubenswrapper[4141]: I0312 14:12:11.570318 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f"}
Mar 12 14:12:11.571442 master-0 kubenswrapper[4141]: I0312 14:12:11.571373 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f"}
Mar 12 14:12:11.572404 master-0 kubenswrapper[4141]: I0312 14:12:11.572381 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de"}
Mar 12 14:12:11.573503 master-0 kubenswrapper[4141]: I0312 14:12:11.573445 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vb4v5" event={"ID":"b9d51570-06dd-4e2f-9c19-07fb694279ae","Type":"ContainerStarted","Data":"e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522"}
Mar 12 14:12:11.580443 master-0 kubenswrapper[4141]: I0312 14:12:11.580376 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7"}
Mar 12 14:12:11.587567 master-0 kubenswrapper[4141]: I0312 14:12:11.583852 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345"}
Mar 12 14:12:11.587567 master-0 kubenswrapper[4141]: I0312 14:12:11.585553 4141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238"}
Mar 12 14:12:11.882265 master-0 kubenswrapper[4141]: I0312 14:12:11.882146 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:11.882412 master-0 kubenswrapper[4141]: I0312 14:12:11.882278 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:11.882499 master-0 kubenswrapper[4141]: E0312 14:12:11.882474 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:11.882613 master-0 kubenswrapper[4141]: E0312 14:12:11.882591 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 14:12:11.882680 master-0 kubenswrapper[4141]: E0312 14:12:11.882661 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.882639135 +0000 UTC m=+108.444211384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found
Mar 12 14:12:11.883806 master-0 kubenswrapper[4141]: E0312 14:12:11.883781 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.883768793 +0000 UTC m=+108.445341042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:11.984111 master-0 kubenswrapper[4141]: I0312 14:12:11.984028 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:11.984361 master-0 kubenswrapper[4141]: E0312 14:12:11.984309 4141 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:11.984436 master-0 kubenswrapper[4141]: E0312 14:12:11.984417 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.984391676 +0000 UTC m=+108.545963925 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:11.984508 master-0 kubenswrapper[4141]: I0312 14:12:11.984484 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"
Mar 12 14:12:11.984575 master-0 kubenswrapper[4141]: I0312 14:12:11.984554 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: I0312 14:12:11.984634 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: I0312 14:12:11.984712 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984749 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984810 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984823 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.984803946 +0000 UTC m=+108.546376195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984841 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.984830307 +0000 UTC m=+108.546402766 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984876 4141 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984943 4141 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984952 4141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: I0312 14:12:11.984913 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb"
Mar 12 14:12:11.984989 master-0 kubenswrapper[4141]: E0312 14:12:11.984990 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.984964321 +0000 UTC m=+108.546536630 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: E0312 14:12:11.985011 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.985002522 +0000 UTC m=+108.546574771 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: E0312 14:12:11.985028 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.985020592 +0000 UTC m=+108.546592931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: I0312 14:12:11.985059 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: I0312 14:12:11.985109 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: I0312 14:12:11.985180 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: E0312 14:12:11.985286 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 14:12:11.985339 master-0 kubenswrapper[4141]: E0312 14:12:11.985315 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.985305889 +0000 UTC m=+108.546878138 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found
Mar 12 14:12:11.985562 master-0 kubenswrapper[4141]: E0312 14:12:11.985367 4141 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:11.985562 master-0 kubenswrapper[4141]: E0312 14:12:11.985397 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.985385991 +0000 UTC m=+108.546958410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found
Mar 12 14:12:11.985562 master-0 kubenswrapper[4141]: E0312 14:12:11.985440 4141 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 14:12:11.985562 master-0 kubenswrapper[4141]: E0312 14:12:11.985464 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:13.985456743 +0000 UTC m=+108.547029082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found
Mar 12 14:12:12.131370 master-0 kubenswrapper[4141]: I0312 14:12:12.131309 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:12:12.134813 master-0 kubenswrapper[4141]: I0312 14:12:12.132613 4141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:12.135970 master-0 kubenswrapper[4141]: I0312 14:12:12.134836 4141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 14:12:12.135970 master-0 kubenswrapper[4141]: I0312 14:12:12.135054 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 12 14:12:12.135970 master-0 kubenswrapper[4141]: I0312 14:12:12.135175 4141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 14:12:12.140150 master-0 kubenswrapper[4141]: I0312 14:12:12.140087 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" podStartSLOduration=71.140062844 podStartE2EDuration="1m11.140062844s" podCreationTimestamp="2026-03-12 14:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:11.576864228 +0000 UTC m=+106.138436497" watchObservedRunningTime="2026-03-12 14:12:12.140062844 +0000 UTC m=+106.701635093"
Mar 12 14:12:12.140968 master-0 kubenswrapper[4141]: I0312 14:12:12.140935 4141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 12 14:12:13.905289 master-0 kubenswrapper[4141]: I0312 14:12:13.905223 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:13.905997 master-0 kubenswrapper[4141]: E0312 14:12:13.905394 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 14:12:13.905997 master-0 kubenswrapper[4141]: E0312 14:12:13.905492 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:17.905472051 +0000 UTC m=+112.467044300 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found
Mar 12 14:12:13.905997 master-0 kubenswrapper[4141]: I0312 14:12:13.905526 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:13.905997 master-0 kubenswrapper[4141]: E0312 14:12:13.905646 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:13.905997 master-0 kubenswrapper[4141]: E0312 14:12:13.905723 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:17.905661145 +0000 UTC m=+112.467233394 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:14.007038 master-0 kubenswrapper[4141]: I0312 14:12:14.006946 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:14.007038 master-0 kubenswrapper[4141]: I0312 14:12:14.007021 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"
Mar 12 14:12:14.007038 master-0 kubenswrapper[4141]: I0312 14:12:14.007051 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb"
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: I0312 14:12:14.007080 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: I0312 14:12:14.007100 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: I0312 14:12:14.007126 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: E0312 14:12:14.007336 4141 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: E0312 14:12:14.007412 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: E0312 14:12:14.007426 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007403007 +0000 UTC m=+112.568975256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found
Mar 12 14:12:14.007455 master-0 kubenswrapper[4141]: E0312 14:12:14.007466 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007450128 +0000 UTC m=+112.569022467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found
Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007508 4141 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007552 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.00754185 +0000 UTC m=+112.569114099 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007595 4141 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007618 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007610542 +0000 UTC m=+112.569182791 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007639 4141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007675 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007663443 +0000 UTC m=+112.569235762 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007681 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007704 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007697774 +0000 UTC m=+112.569270023 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: I0312 14:12:14.007769 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: I0312 14:12:14.007805 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: I0312 14:12:14.007841 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007891 4141 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007934 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007926839 +0000 UTC m=+112.569499178 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:14.007893 master-0 kubenswrapper[4141]: E0312 14:12:14.007952 4141 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:14.008776 master-0 kubenswrapper[4141]: E0312 14:12:14.007988 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.007978761 +0000 UTC m=+112.569551080 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:14.008776 master-0 kubenswrapper[4141]: E0312 14:12:14.007984 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:14.008776 master-0 kubenswrapper[4141]: E0312 14:12:14.008064 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:18.008046362 +0000 UTC m=+112.569618611 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:16.767181 master-0 kubenswrapper[4141]: I0312 14:12:16.767091 4141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=4.767073531 podStartE2EDuration="4.767073531s" podCreationTimestamp="2026-03-12 14:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:16.764299822 +0000 UTC m=+111.325872091" watchObservedRunningTime="2026-03-12 14:12:16.767073531 +0000 UTC m=+111.328645790" Mar 12 14:12:17.954038 master-0 kubenswrapper[4141]: I0312 14:12:17.954002 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:17.954562 master-0 kubenswrapper[4141]: E0312 14:12:17.954205 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:17.954562 master-0 kubenswrapper[4141]: I0312 14:12:17.954243 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:17.954562 master-0 kubenswrapper[4141]: E0312 14:12:17.954306 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:25.95428665 +0000 UTC m=+120.515858899 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:17.954562 master-0 kubenswrapper[4141]: E0312 14:12:17.954367 4141 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:17.954562 master-0 kubenswrapper[4141]: E0312 14:12:17.954420 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:25.954404923 +0000 UTC m=+120.515977172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:18.055114 master-0 kubenswrapper[4141]: I0312 14:12:18.055074 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:18.055114 master-0 kubenswrapper[4141]: I0312 14:12:18.055118 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055138 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055163 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: 
\"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055189 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055211 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055234 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:18.055375 master-0 kubenswrapper[4141]: I0312 14:12:18.055250 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:18.055560 master-0 kubenswrapper[4141]: E0312 14:12:18.055498 4141 secret.go:189] Couldn't get secret 
openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 14:12:18.055560 master-0 kubenswrapper[4141]: E0312 14:12:18.055550 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.055534629 +0000 UTC m=+120.617106878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:18.055931 master-0 kubenswrapper[4141]: E0312 14:12:18.055908 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:18.055971 master-0 kubenswrapper[4141]: E0312 14:12:18.055939 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.055929748 +0000 UTC m=+120.617501997 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:18.056020 master-0 kubenswrapper[4141]: I0312 14:12:18.055273 4141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:18.056141 master-0 kubenswrapper[4141]: E0312 14:12:18.056115 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:18.056981 master-0 kubenswrapper[4141]: E0312 14:12:18.056956 4141 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:18.057031 master-0 kubenswrapper[4141]: E0312 14:12:18.057017 4141 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:18.057111 master-0 kubenswrapper[4141]: E0312 14:12:18.057091 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.056137314 +0000 UTC m=+120.617709563 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:18.057152 master-0 kubenswrapper[4141]: E0312 14:12:18.057111 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.057102467 +0000 UTC m=+120.618674706 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:18.057152 master-0 kubenswrapper[4141]: E0312 14:12:18.057121 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.057116168 +0000 UTC m=+120.618688417 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:18.057222 master-0 kubenswrapper[4141]: E0312 14:12:18.057172 4141 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:18.057222 master-0 kubenswrapper[4141]: E0312 14:12:18.057212 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.05720549 +0000 UTC m=+120.618777739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:18.057470 master-0 kubenswrapper[4141]: E0312 14:12:18.057446 4141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:18.057510 master-0 kubenswrapper[4141]: E0312 14:12:18.057484 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.057477727 +0000 UTC m=+120.619049976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:18.057510 master-0 kubenswrapper[4141]: E0312 14:12:18.057502 4141 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:18.057571 master-0 kubenswrapper[4141]: E0312 14:12:18.057505 4141 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:18.057571 master-0 kubenswrapper[4141]: E0312 14:12:18.057523 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.057518438 +0000 UTC m=+120.619090687 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:18.057630 master-0 kubenswrapper[4141]: E0312 14:12:18.057594 4141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:26.05757647 +0000 UTC m=+120.619148719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:19.579755 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 14:12:19.612610 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 14:12:19.612933 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 14:12:19.614531 master-0 systemd[1]: kubelet.service: Consumed 7.650s CPU time. Mar 12 14:12:19.626591 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 12 14:12:19.698833 master-0 kubenswrapper[7440]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:12:19.700007 master-0 kubenswrapper[7440]: I0312 14:12:19.698941 7440 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 14:12:19.701154 master-0 kubenswrapper[7440]: W0312 14:12:19.701132 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 14:12:19.701154 master-0 kubenswrapper[7440]: W0312 14:12:19.701148 7440 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 14:12:19.701154 master-0 kubenswrapper[7440]: W0312 14:12:19.701153 7440 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:12:19.701154 master-0 kubenswrapper[7440]: W0312 14:12:19.701157 7440 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701162 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701165 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701169 7440 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701172 7440 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701176 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701180 7440 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701183 7440 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701187 7440 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701190 7440 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701194 7440 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701198 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701201 7440 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701204 7440 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701208 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701211 7440 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701220 7440 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701224 7440 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701227 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701231 7440 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 
14:12:19.701278 master-0 kubenswrapper[7440]: W0312 14:12:19.701234 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701238 7440 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701241 7440 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701245 7440 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701248 7440 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701252 7440 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701256 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701260 7440 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701263 7440 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701267 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701271 7440 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701274 7440 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701278 7440 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701281 7440 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701285 7440 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701288 7440 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701293 7440 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701296 7440 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701300 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701303 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 14:12:19.701733 master-0 kubenswrapper[7440]: W0312 14:12:19.701307 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701310 7440 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701314 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701317 7440 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701321 7440 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701324 7440 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701328 7440 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701331 7440 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701335 7440 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701338 7440 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701344 7440 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701349 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701353 7440 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701357 7440 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701361 7440 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701365 7440 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701370 7440 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701375 7440 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 14:12:19.702204 master-0 kubenswrapper[7440]: W0312 14:12:19.701381 7440 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701386 7440 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701391 7440 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701395 7440 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701399 7440 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701403 7440 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701407 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701411 7440 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701415 7440 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701419 7440 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: W0312 14:12:19.701422 7440 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701506 7440 flags.go:64] FLAG: --address="0.0.0.0" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701514 7440 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701520 7440 flags.go:64] FLAG: --anonymous-auth="true" Mar 12 14:12:19.702691 master-0 
kubenswrapper[7440]: I0312 14:12:19.701525 7440 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701530 7440 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701535 7440 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701542 7440 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701547 7440 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701551 7440 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701555 7440 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 12 14:12:19.702691 master-0 kubenswrapper[7440]: I0312 14:12:19.701560 7440 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701564 7440 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701568 7440 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701572 7440 flags.go:64] FLAG: --cgroup-root="" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701576 7440 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701581 7440 flags.go:64] FLAG: --client-ca-file="" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701584 7440 flags.go:64] FLAG: --cloud-config="" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701588 7440 flags.go:64] FLAG: --cloud-provider="" Mar 12 
14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.701592 7440 flags.go:64] FLAG: --cluster-dns="[]" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702043 7440 flags.go:64] FLAG: --cluster-domain="" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702228 7440 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702238 7440 flags.go:64] FLAG: --config-dir="" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702245 7440 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702253 7440 flags.go:64] FLAG: --container-log-max-files="5" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702263 7440 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702270 7440 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702276 7440 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702281 7440 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702288 7440 flags.go:64] FLAG: --contention-profiling="false" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702297 7440 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702303 7440 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702309 7440 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702314 7440 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 12 14:12:19.703224 
master-0 kubenswrapper[7440]: I0312 14:12:19.702321 7440 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702327 7440 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 12 14:12:19.703224 master-0 kubenswrapper[7440]: I0312 14:12:19.702332 7440 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702337 7440 flags.go:64] FLAG: --enable-load-reader="false" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702347 7440 flags.go:64] FLAG: --enable-server="true" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702352 7440 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702395 7440 flags.go:64] FLAG: --event-burst="100" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702400 7440 flags.go:64] FLAG: --event-qps="50" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702405 7440 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702411 7440 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702416 7440 flags.go:64] FLAG: --eviction-hard="" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702423 7440 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702432 7440 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702437 7440 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702443 7440 flags.go:64] FLAG: --eviction-soft="" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 
14:12:19.702448 7440 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702453 7440 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702458 7440 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702464 7440 flags.go:64] FLAG: --experimental-mounter-path="" Mar 12 14:12:19.703748 master-0 kubenswrapper[7440]: I0312 14:12:19.702469 7440 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 12 14:12:19.704266 master-0 kubenswrapper[7440]: I0312 14:12:19.702475 7440 flags.go:64] FLAG: --fail-swap-on="true" Mar 12 14:12:19.704583 master-0 kubenswrapper[7440]: I0312 14:12:19.704533 7440 flags.go:64] FLAG: --feature-gates="" Mar 12 14:12:19.704626 master-0 kubenswrapper[7440]: I0312 14:12:19.704579 7440 flags.go:64] FLAG: --file-check-frequency="20s" Mar 12 14:12:19.704626 master-0 kubenswrapper[7440]: I0312 14:12:19.704617 7440 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 12 14:12:19.704626 master-0 kubenswrapper[7440]: I0312 14:12:19.704625 7440 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704633 7440 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704641 7440 flags.go:64] FLAG: --healthz-port="10248" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704648 7440 flags.go:64] FLAG: --help="false" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704656 7440 flags.go:64] FLAG: --hostname-override="" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704668 7440 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704673 7440 flags.go:64] FLAG: --http-check-frequency="20s" Mar 
12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704696 7440 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704702 7440 flags.go:64] FLAG: --image-credential-provider-config="" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704707 7440 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704712 7440 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704717 7440 flags.go:64] FLAG: --image-service-endpoint="" Mar 12 14:12:19.704717 master-0 kubenswrapper[7440]: I0312 14:12:19.704723 7440 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704729 7440 flags.go:64] FLAG: --kube-api-burst="100" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704738 7440 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704745 7440 flags.go:64] FLAG: --kube-api-qps="50" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704750 7440 flags.go:64] FLAG: --kube-reserved="" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704757 7440 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704777 7440 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704784 7440 flags.go:64] FLAG: --kubelet-cgroups="" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704810 7440 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704817 7440 flags.go:64] FLAG: --lock-file="" Mar 12 14:12:19.705202 master-0 
kubenswrapper[7440]: I0312 14:12:19.704821 7440 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704829 7440 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704834 7440 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704843 7440 flags.go:64] FLAG: --log-json-split-stream="false" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704848 7440 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704852 7440 flags.go:64] FLAG: --log-text-split-stream="false" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704857 7440 flags.go:64] FLAG: --logging-format="text" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704861 7440 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704908 7440 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704943 7440 flags.go:64] FLAG: --manifest-url="" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704949 7440 flags.go:64] FLAG: --manifest-url-header="" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.704962 7440 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.705191 7440 flags.go:64] FLAG: --max-open-files="1000000" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.705198 7440 flags.go:64] FLAG: --max-pods="110" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.705203 7440 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 12 14:12:19.705202 master-0 kubenswrapper[7440]: I0312 14:12:19.705209 7440 
flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705216 7440 flags.go:64] FLAG: --memory-manager-policy="None" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705227 7440 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705234 7440 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705240 7440 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705246 7440 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705291 7440 flags.go:64] FLAG: --node-status-max-images="50" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705296 7440 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705301 7440 flags.go:64] FLAG: --oom-score-adj="-999" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705325 7440 flags.go:64] FLAG: --pod-cidr="" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705332 7440 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705345 7440 flags.go:64] FLAG: --pod-manifest-path="" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705349 7440 flags.go:64] FLAG: --pod-max-pids="-1" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705354 7440 flags.go:64] FLAG: --pods-per-core="0" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705359 7440 
flags.go:64] FLAG: --port="10250" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705363 7440 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705368 7440 flags.go:64] FLAG: --provider-id="" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705375 7440 flags.go:64] FLAG: --qos-reserved="" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705380 7440 flags.go:64] FLAG: --read-only-port="10255" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705386 7440 flags.go:64] FLAG: --register-node="true" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705409 7440 flags.go:64] FLAG: --register-schedulable="true" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705429 7440 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705452 7440 flags.go:64] FLAG: --registry-burst="10" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705458 7440 flags.go:64] FLAG: --registry-qps="5" Mar 12 14:12:19.705833 master-0 kubenswrapper[7440]: I0312 14:12:19.705464 7440 flags.go:64] FLAG: --reserved-cpus="" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705474 7440 flags.go:64] FLAG: --reserved-memory="" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705483 7440 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705511 7440 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705520 7440 flags.go:64] FLAG: --rotate-certificates="false" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705526 7440 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: 
I0312 14:12:19.705531 7440 flags.go:64] FLAG: --runonce="false" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705535 7440 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705541 7440 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705550 7440 flags.go:64] FLAG: --seccomp-default="false" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705555 7440 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705561 7440 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705566 7440 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705571 7440 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705595 7440 flags.go:64] FLAG: --storage-driver-password="root" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705600 7440 flags.go:64] FLAG: --storage-driver-secure="false" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705605 7440 flags.go:64] FLAG: --storage-driver-table="stats" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705609 7440 flags.go:64] FLAG: --storage-driver-user="root" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705617 7440 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705621 7440 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705626 7440 flags.go:64] FLAG: --system-cgroups="" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705630 7440 
flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705639 7440 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705645 7440 flags.go:64] FLAG: --tls-cert-file="" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705650 7440 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 12 14:12:19.706441 master-0 kubenswrapper[7440]: I0312 14:12:19.705687 7440 flags.go:64] FLAG: --tls-min-version="" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705692 7440 flags.go:64] FLAG: --tls-private-key-file="" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705697 7440 flags.go:64] FLAG: --topology-manager-policy="none" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705701 7440 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705706 7440 flags.go:64] FLAG: --topology-manager-scope="container" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705710 7440 flags.go:64] FLAG: --v="2" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705718 7440 flags.go:64] FLAG: --version="false" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705724 7440 flags.go:64] FLAG: --vmodule="" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705733 7440 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: I0312 14:12:19.705738 7440 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706089 7440 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706099 7440 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706104 7440 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706107 7440 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706113 7440 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706118 7440 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706124 7440 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706129 7440 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706132 7440 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706137 7440 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706141 7440 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706162 7440 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 14:12:19.707072 master-0 kubenswrapper[7440]: W0312 14:12:19.706169 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706173 7440 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706177 7440 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706180 7440 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706184 7440 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706187 7440 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706191 7440 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706196 7440 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706200 7440 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706203 7440 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706207 7440 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706211 7440 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706214 7440 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706220 7440 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706224 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706246 7440 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 
14:12:19.706250 7440 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706254 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706258 7440 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706262 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706266 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:12:19.707720 master-0 kubenswrapper[7440]: W0312 14:12:19.706269 7440 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706274 7440 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706279 7440 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706283 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706290 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706295 7440 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706301 7440 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706305 7440 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706327 7440 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706333 7440 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706337 7440 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706341 7440 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706346 7440 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706350 7440 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706354 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706375 7440 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706382 7440 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706386 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706390 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:12:19.708214 master-0 kubenswrapper[7440]: W0312 14:12:19.706394 7440 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706398 7440 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706402 7440 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706405 7440 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706409 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706413 7440 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706418 7440 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706422 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706427 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706431 7440 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706470 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706475 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706480 7440 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706484 7440 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706487 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706491 7440 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706495 7440 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706499 7440 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706503 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:12:19.708743 master-0 kubenswrapper[7440]: W0312 14:12:19.706507 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:12:19.709400 master-0 kubenswrapper[7440]: I0312 14:12:19.706542 7440 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:12:19.713403 master-0 kubenswrapper[7440]: I0312 14:12:19.713369 7440 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 14:12:19.713403 master-0 kubenswrapper[7440]: I0312 14:12:19.713395 7440 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713455 7440 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713462 7440 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713467 7440 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713473 7440 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713477 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713481 7440 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713484 7440 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713488 7440 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713492 7440 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:12:19.713493 master-0 kubenswrapper[7440]: W0312 14:12:19.713495 7440 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713499 7440 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713503 7440 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713506 7440 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713510 7440 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713514 7440 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713519 7440 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713524 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713528 7440 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713533 7440 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713538 7440 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713542 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713546 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713550 7440 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713554 7440 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713558 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713562 7440 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713565 7440 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713569 7440 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:12:19.713753 master-0 kubenswrapper[7440]: W0312 14:12:19.713573 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713577 7440 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713581 7440 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713585 7440 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713589 7440 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713593 7440 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713597 7440 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713601 7440 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713605 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713609 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713612 7440 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713616 7440 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713619 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713624 7440 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713627 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713631 7440 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713635 7440 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713638 7440 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713644 7440 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:12:19.714372 master-0 kubenswrapper[7440]: W0312 14:12:19.713649 7440 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713653 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713657 7440 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713662 7440 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713666 7440 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713670 7440 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713674 7440 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713678 7440 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713682 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713686 7440 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713690 7440 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713694 7440 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713697 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713702 7440 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713707 7440 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713712 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713715 7440 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713719 7440 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713723 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713727 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:12:19.714926 master-0 kubenswrapper[7440]: W0312 14:12:19.713730 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713734 7440 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713737 7440 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713741 7440 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713744 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: I0312 14:12:19.713750 7440 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713864 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713870 7440 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713875 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713879 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713882 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713886 7440 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713891 7440 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713943 7440 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713948 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:12:19.715350 master-0 kubenswrapper[7440]: W0312 14:12:19.713952 7440 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713955 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713961 7440 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713964 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713968 7440 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713971 7440 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713975 7440 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713979 7440 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713982 7440 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713986 7440 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713990 7440 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713993 7440 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.713997 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714001 7440 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714004 7440 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714008 7440 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714011 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714015 7440 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714019 7440 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714022 7440 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:12:19.715700 master-0 kubenswrapper[7440]: W0312 14:12:19.714026 7440 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714031 7440 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714036 7440 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714039 7440 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714043 7440 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714047 7440 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714052 7440 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714057 7440 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714061 7440 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714065 7440 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714069 7440 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714073 7440 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714076 7440 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714080 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714084 7440 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714088 7440 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714092 7440 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714095 7440 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714100 7440 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:12:19.716157 master-0 kubenswrapper[7440]: W0312 14:12:19.714103 7440 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714106 7440 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714110 7440 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714114 7440 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714117 7440 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714121 7440 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714124 7440 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714128 7440 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714132 7440 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714135 7440 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714138 7440 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714142 7440 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714146 7440 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714149 7440 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714153 7440 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714157 7440 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714161 7440 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714166 7440 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714169 7440 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714173 7440 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:12:19.716621 master-0 kubenswrapper[7440]: W0312 14:12:19.714177 7440 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: W0312 14:12:19.714180 7440 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: W0312 14:12:19.714184 7440 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: W0312 14:12:19.714188 7440 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.714194 7440 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.714342 7440 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.716491 7440 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.716559 7440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.716791 7440 server.go:997] "Starting client certificate rotation"
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.716800 7440 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.716970 7440 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 09:17:33.242379908 +0000 UTC
Mar 12 14:12:19.717091 master-0 kubenswrapper[7440]: I0312 14:12:19.717080 7440 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h5m13.525303676s for next certificate rotation
Mar 12 14:12:19.717367 master-0 kubenswrapper[7440]: I0312 14:12:19.717343 7440 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 14:12:19.718482 master-0 kubenswrapper[7440]: I0312 14:12:19.718452 7440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 14:12:19.722557 master-0 kubenswrapper[7440]: I0312 14:12:19.722525 7440 log.go:25] "Validated CRI v1 runtime API"
Mar 12 14:12:19.724879 master-0 kubenswrapper[7440]: I0312 14:12:19.724855 7440 log.go:25] "Validated CRI v1 image API"
Mar 12 14:12:19.725746 master-0 kubenswrapper[7440]: I0312 14:12:19.725705 7440 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 14:12:19.729648 master-0 kubenswrapper[7440]: I0312 14:12:19.729564 7440 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 847b9f13-6083-4550-852f-e0336cfa76ca:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 12 14:12:19.730513 master-0 kubenswrapper[7440]: I0312 14:12:19.729645 7440 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm major:0 minor:239 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t:{mountpoint:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl:{mountpoint:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d:{mountpoint:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc:{mountpoint:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd:{mountpoint:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg:{mountpoint:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv:{mountpoint:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5:{mountpoint:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5 major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v:{mountpoint:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g:{mountpoint:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7023af8b-bfcc-4253-85cd-d891dff1c86e/volumes/kubernetes.io~projected/kube-api-access-dm476:{mountpoint:/var/lib/kubelet/pods/7023af8b-bfcc-4253-85cd-d891dff1c86e/volumes/kubernetes.io~projected/kube-api-access-dm476 major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w:{mountpoint:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx major:0 minor:127 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp:{mountpoint:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv:{mountpoint:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6:{mountpoint:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts:{mountpoint:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts major:0 minor:231 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j:{mountpoint:/var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd:{mountpoint:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl:{mountpoint:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns:{mountpoint:/var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns major:0 minor:105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm:{mountpoint:/var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv:{mountpoint:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl:{mountpoint:/var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q:{mountpoint:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q major:0 minor:256 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9:{mountpoint:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9 major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert major:0 minor:140 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/f9d3c84142517cd138f2413be7ba207ae7d7046b40bf9ac399e2c38b3a171198/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/4c53aaa3c766c2e21b92b2cba6cdee74b21d19b33610cf550c4c808bc7d7686c/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/a4cc60cc955d52a93c484e196e87bf52a7e723149f471c707012cd41bfb5e7a3/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/3619cef3dbc8d8eae831d0928051aefa4e241d3e82c098af355bbce1c657a0c8/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3e15df7b511d07530e8097da3b8fc8943a006dd08de698a3ec232b5a6b93b12b/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/15f01deb4ab497d91f622675f60f0e855693766540ae9ff4e2a30e81bbf6fb54/merged major:0 minor:132 fsType:overlay blockSize:0} 
overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/db9c10fb78b7c84d5bad58fa27fd84070b6de6bbd1e2895d431829cdc64003a5/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/162fd1652dab1e53525d2bd9ed8782172ccc1ca9fd57d981bec3a3431cbaeb13/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/6da148745ca9151032b82a2c1e9898728b4bc4124b86b9ea39dc939518e1051e/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/15e3d3a61f49cc873877c380ff1ba610ae69ab0dff1f84e044be83d77dd1fe21/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/b9be8c2a1b8f54f13123602444e7f3026b0c79d09fd7fdf7a4aae4ef5ab81cac/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/d7801a1f237e3eae1514700dcf533d0c0dfbc45b4c1e303921557444a627360f/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ca4ef5f273658c2f0ac27108513a03e4717e00aca75e6361a5a5f98b3b101af5/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/9f6e00018799d2796cedb84da3a413e5128c0e4293f054377062145baf870d98/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/3b513ee961359669705362e7371564f1d0e17baafb3007206507ed2010ada3ab/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2e72d41f6d85089a5d77d079151d819aa1c77ef1e52fb75fb34dfa0ede6a2314/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/32124ee8694c9b34488bab7eb742dc9b5f377fbc178316c0d5aeb6608b8f77cc/merged major:0 minor:179 fsType:overlay blockSize:0} 
overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/24df0a275e56935a697db889bbc9bbbd27e90f78ca2db30863de7bb506ef7398/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/a9c601662dd223a12d57aa63a7ed568f61cfc64c895610665161c008680f6d90/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/e83dd687fd60d175c9b00e32df7e8d7c347edf1f3dcd1749a948e9ca1cf28890/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/bd1616afefaffaa6083e4928cacadbbf145fbcefa7a46945b703568c2c6207ab/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/61dc9b4a330d6dc1210a239eabc75def091e83b251cd64c9b08d3566dd90c3c8/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/a9637eadba2e5e770c4200d2bc633f5420326caa798514b5fb1340b07723d32d/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/1d6da2fac25bb5f9d861ced1c5a42a2f289fe3119a915dac81404c1480874283/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/30501fea601fb38955f429099bc47c2376772448c3284ec89d82a4485d0274ac/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/baa44976eb7c61a33f6e8fb1f9b8869132324d62873f7e84dc01352a8ba11735/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/4d33795c48e6c06a6bba3e78a2e1f63b523f5454e3a423e7f0a79a876ee04a3d/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1ee374b9e875f8e97bdc3f51208e8e0f4095ddea09a39d9017b13d0e34097401/merged major:0 minor:285 fsType:overlay blockSize:0} 
overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/b4737cfb0bee92a9b39bd98408228b425332bce3e47632c13f04d91db556f58e/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/0b5995d6ab4cb1d37852d1385eda72c1305fb81f36822392b0f9eb0d5df6636a/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/6d9634c168c7db74feb81783daef03e386fda835e5c7ab8ddab1fbc252214bc8/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/084579f7fda123e3eacb1faeae140270ffd5afc52088f22ea8e34202cc3e30b3/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/139e598740f254c07cdc42110c5735a5226ae18efcdc86c26683ca34ac686033/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/6bb6dffd18980dc8faf8a352db4a975c02ccc90f05f7257e1d5871754f0ea951/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/bb1a76c3a3934eb3479404c0933495e566a1555931e23a36a69c56824bdde124/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/31ce0783f4aa69c1ba3ead0dd61117ff23ddcc69c26e2102009dc13bb379741f/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/ecb92476c651c1f5dd117dcb1e677cb793322cc0155e30b3066869cf99b226b6/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-49:{mountpoint:/var/lib/containers/storage/overlay/6f0a275eaa649615ff87ae3ae9c8c3708b1b60fa4eff2ba949a1338249207d31/merged major:0 minor:49 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/803f6a092bb4f2059e816d54f81bbc0b0f7664a1feabcf075300da3d8eb49c39/merged major:0 minor:52 fsType:overlay blockSize:0} 
overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/08c4d57d989c3486df8f53773d393ab7755d4d85ba70a596c7476885bc73259d/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/45494390ba458d9e289aa62cc1fd9126caef27e2fecd79ec1b659317b0646ec0/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/082f4feed6bb3500a0a0e59e75099ac5a0b30eed77ee94e1d295dc557dea439c/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/7f7cca31991f61ca9aaaa31cc9af43e7362308a57acf9f1ba67dd8eff542e62a/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/1ae99cffda7d4e32db7442bbab33bd1b4735cf27a31026ed2fd6196b83eda3ca/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/14bc9b0a6fc223a9eb9ada9ccb913aac0bfefb46d9fdbcce66451641dacd1d48/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/1e2c88db8a2876e2f72e732e0de98cd46ef5cbaf0bbcf87a61bf51f0a7bc0497/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/4db9735232980a9a93cd3912a15c37439bd2fdd8b78e8462f43ce1c83cebfe4e/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/d27214cb6296540ee71b0ef85ecc16c182fea82c225f8c4df2d25a3bcfd111bd/merged major:0 minor:83 fsType:overlay blockSize:0}] Mar 12 14:12:19.754860 master-0 kubenswrapper[7440]: I0312 14:12:19.754293 7440 manager.go:217] Machine: {Timestamp:2026-03-12 14:12:19.753343371 +0000 UTC m=+0.088721930 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4246b74030446349cda326caa7abc15 SystemUUID:e4246b74-0304-4634-9cda-326caa7abc15 BootID:00119185-c574-4bb3-ab0c-7bce10775874 Filesystems:[{Device:/var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g DeviceMajor:0 DeviceMinor:125 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5 DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9 DeviceMajor:0 DeviceMinor:147 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7023af8b-bfcc-4253-85cd-d891dff1c86e/volumes/kubernetes.io~projected/kube-api-access-dm476 DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-49 DeviceMajor:0 DeviceMinor:49 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:94 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm DeviceMajor:0 DeviceMinor:239 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 
DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6 DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 
DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1ba5c83b988cf94 MacAddress:be:e7:8f:c3:97:c3 Speed:10000 Mtu:8900} {Name:1cc258e5add24f8 MacAddress:1e:09:54:50:00:60 Speed:10000 Mtu:8900} {Name:2ed4af146d2bc6a MacAddress:46:f0:99:90:ed:48 Speed:10000 Mtu:8900} {Name:43ed8c1a4973dd1 MacAddress:8a:c2:6b:45:40:cb Speed:10000 Mtu:8900} {Name:59d708b78a7b260 MacAddress:e6:c4:bf:bd:b7:9e Speed:10000 Mtu:8900} {Name:643a9eb1fc3e8f4 MacAddress:3a:59:9a:8b:db:91 Speed:10000 Mtu:8900} {Name:667a33334db41ad MacAddress:62:7c:36:d9:01:5f Speed:10000 Mtu:8900} {Name:7bbac52760e3fcb MacAddress:6e:c2:91:d7:8b:cb Speed:10000 Mtu:8900} {Name:84ea14c79c94352 MacAddress:6e:95:54:25:85:62 Speed:10000 Mtu:8900} {Name:b4d899998f74545 MacAddress:76:cc:1f:81:b8:9c Speed:10000 Mtu:8900} {Name:b820d186bee28ed MacAddress:4e:34:1c:51:aa:9d Speed:10000 Mtu:8900} {Name:bb2ba7d0c1c5133 MacAddress:02:93:03:7b:f1:99 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:d6:09:00:13:e9:99 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fa:69:5a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bb:95:55 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:3f:76:c6:88:2a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 
NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 
Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 14:12:19.754860 master-0 kubenswrapper[7440]: I0312 14:12:19.754846 7440 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 12 14:12:19.755195 master-0 kubenswrapper[7440]: I0312 14:12:19.755037 7440 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 14:12:19.755368 master-0 kubenswrapper[7440]: I0312 14:12:19.755333 7440 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 14:12:19.755554 master-0 kubenswrapper[7440]: I0312 14:12:19.755515 7440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 14:12:19.755761 master-0 kubenswrapper[7440]: I0312 14:12:19.755549 7440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 14:12:19.755800 master-0 kubenswrapper[7440]: I0312 14:12:19.755783 7440 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 14:12:19.755800 master-0 kubenswrapper[7440]: I0312 14:12:19.755796 7440 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 14:12:19.755847 master-0 kubenswrapper[7440]: I0312 14:12:19.755806 7440 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:12:19.755847 master-0 kubenswrapper[7440]: I0312 14:12:19.755829 7440 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:12:19.756063 master-0 kubenswrapper[7440]: I0312 14:12:19.756031 7440 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:12:19.756146 master-0 kubenswrapper[7440]: I0312 14:12:19.756130 7440 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 14:12:19.756227 master-0 kubenswrapper[7440]: I0312 14:12:19.756214 7440 kubelet.go:418] "Attempting to sync node with API server" Mar 12 14:12:19.756268 master-0 kubenswrapper[7440]: I0312 14:12:19.756233 7440 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 14:12:19.756268 master-0 kubenswrapper[7440]: I0312 14:12:19.756250 7440 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 14:12:19.756268 master-0 kubenswrapper[7440]: I0312 14:12:19.756263 7440 kubelet.go:324] "Adding apiserver pod source" Mar 12 14:12:19.756374 master-0 
kubenswrapper[7440]: I0312 14:12:19.756283 7440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 14:12:19.757376 master-0 kubenswrapper[7440]: I0312 14:12:19.757343 7440 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 14:12:19.757542 master-0 kubenswrapper[7440]: I0312 14:12:19.757521 7440 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 12 14:12:19.757780 master-0 kubenswrapper[7440]: I0312 14:12:19.757760 7440 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 14:12:19.757890 master-0 kubenswrapper[7440]: I0312 14:12:19.757866 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 14:12:19.757890 master-0 kubenswrapper[7440]: I0312 14:12:19.757885 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 14:12:19.757890 master-0 kubenswrapper[7440]: I0312 14:12:19.757892 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 14:12:19.757890 master-0 kubenswrapper[7440]: I0312 14:12:19.757913 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 14:12:19.757890 master-0 kubenswrapper[7440]: I0312 14:12:19.757920 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757927 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757934 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757940 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 14:12:19.758188 master-0 
kubenswrapper[7440]: I0312 14:12:19.757947 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757954 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757983 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.757995 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 14:12:19.758188 master-0 kubenswrapper[7440]: I0312 14:12:19.758022 7440 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 14:12:19.758491 master-0 kubenswrapper[7440]: I0312 14:12:19.758461 7440 server.go:1280] "Started kubelet" Mar 12 14:12:19.758882 master-0 kubenswrapper[7440]: I0312 14:12:19.758810 7440 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 14:12:19.759682 master-0 kubenswrapper[7440]: I0312 14:12:19.759591 7440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 14:12:19.759682 master-0 kubenswrapper[7440]: I0312 14:12:19.759681 7440 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 14:12:19.760083 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 12 14:12:19.766981 master-0 kubenswrapper[7440]: I0312 14:12:19.766692 7440 server.go:449] "Adding debug handlers to kubelet server" Mar 12 14:12:19.769341 master-0 kubenswrapper[7440]: I0312 14:12:19.764210 7440 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 14:12:19.773343 master-0 kubenswrapper[7440]: I0312 14:12:19.773285 7440 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 14:12:19.773343 master-0 kubenswrapper[7440]: I0312 14:12:19.773340 7440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 14:12:19.774171 master-0 kubenswrapper[7440]: I0312 14:12:19.774013 7440 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 10:59:11.42376102 +0000 UTC Mar 12 14:12:19.774171 master-0 kubenswrapper[7440]: I0312 14:12:19.774068 7440 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h46m51.649700236s for next certificate rotation Mar 12 14:12:19.774171 master-0 kubenswrapper[7440]: E0312 14:12:19.774029 7440 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 14:12:19.774171 master-0 kubenswrapper[7440]: I0312 14:12:19.774118 7440 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 14:12:19.774171 master-0 kubenswrapper[7440]: I0312 14:12:19.774132 7440 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 14:12:19.774526 master-0 kubenswrapper[7440]: I0312 14:12:19.774371 7440 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 14:12:19.774526 master-0 kubenswrapper[7440]: I0312 14:12:19.774499 7440 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 14:12:19.775099 master-0 kubenswrapper[7440]: I0312 14:12:19.775077 7440 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 14:12:19.775099 master-0 kubenswrapper[7440]: I0312 14:12:19.775084 7440 factory.go:55] Registering systemd factory Mar 12 14:12:19.775205 master-0 kubenswrapper[7440]: I0312 14:12:19.775105 7440 factory.go:221] Registration of the systemd container factory successfully Mar 12 14:12:19.775917 master-0 kubenswrapper[7440]: I0312 14:12:19.775864 7440 factory.go:153] Registering CRI-O factory Mar 12 14:12:19.775917 master-0 kubenswrapper[7440]: I0312 14:12:19.775889 7440 factory.go:221] Registration of the crio container factory successfully Mar 12 14:12:19.776085 master-0 kubenswrapper[7440]: I0312 14:12:19.776025 7440 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 14:12:19.776085 master-0 kubenswrapper[7440]: I0312 14:12:19.776051 7440 factory.go:103] Registering Raw factory Mar 12 14:12:19.776085 master-0 kubenswrapper[7440]: I0312 14:12:19.776073 7440 manager.go:1196] Started watching for new ooms in manager Mar 12 14:12:19.777029 master-0 kubenswrapper[7440]: I0312 14:12:19.777004 7440 manager.go:319] Starting recovery of all containers Mar 12 14:12:19.777886 master-0 kubenswrapper[7440]: I0312 14:12:19.777835 7440 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 14:12:19.782526 master-0 kubenswrapper[7440]: I0312 14:12:19.782465 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides" seLinuxMountContext="" Mar 12 14:12:19.782526 master-0 kubenswrapper[7440]: I0312 14:12:19.782512 7440 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx" seLinuxMountContext="" Mar 12 14:12:19.782526 master-0 kubenswrapper[7440]: I0312 14:12:19.782527 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782538 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9d51570-06dd-4e2f-9c19-07fb694279ae" volumeName="kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782549 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9d51570-06dd-4e2f-9c19-07fb694279ae" volumeName="kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782562 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782573 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782584 7440 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" volumeName="kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782597 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782611 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782625 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782637 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782648 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782662 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv" seLinuxMountContext="" Mar 12 14:12:19.782759 master-0 kubenswrapper[7440]: I0312 14:12:19.782674 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.782692 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783009 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783030 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783051 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783064 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783076 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783092 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783104 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783118 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783132 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" volumeName="kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns" seLinuxMountContext="" Mar 12 14:12:19.783143 master-0 kubenswrapper[7440]: I0312 14:12:19.783144 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783164 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783183 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783195 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85459175-2c9c-425d-bdfb-0a79c92ed110" volumeName="kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783214 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" volumeName="kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783226 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783242 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783260 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783273 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783290 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783302 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" volumeName="kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783315 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783332 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" volumeName="kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783344 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783361 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783375 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783390 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783408 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates" seLinuxMountContext="" Mar 12 14:12:19.783492 master-0 kubenswrapper[7440]: I0312 14:12:19.783422 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783554 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7433d9bf-4edf-4787-a7a1-e5102c7264c7" volumeName="kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783588 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783618 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fdce71e-8085-4316-be40-e535530c2ca4" volumeName="kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783635 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783656 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6b9f13-4a3a-4920-a84b-f76516501f81" volumeName="kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783673 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783694 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783719 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783748 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783778 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783802 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783820 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783841 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a6a1d6-fecf-4847-b7c1-160d5d7320fb" volumeName="kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783860 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783879 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783943 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783965 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42dbcb8f-e8c4-413e-977d-40aa6df226aa" volumeName="kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783983 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.783997 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc0d552-01c7-4212-a551-d16419f2dc80" volumeName="kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784014 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784032 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7023af8b-bfcc-4253-85cd-d891dff1c86e" volumeName="kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784049 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7433d9bf-4edf-4787-a7a1-e5102c7264c7" volumeName="kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784064 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784079 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access" seLinuxMountContext="" Mar 12 14:12:19.784080 master-0 kubenswrapper[7440]: I0312 14:12:19.784100 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784115 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784132 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc0d552-01c7-4212-a551-d16419f2dc80" volumeName="kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784146 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" volumeName="kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784161 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784181 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784196 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42dbcb8f-e8c4-413e-977d-40aa6df226aa" volumeName="kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784219 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="272b53c4-134c-404d-9a27-c7371415b1f7" volumeName="kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784243 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784258 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784277 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784295 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784307 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784330 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784343 7440 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert" seLinuxMountContext="" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784356 7440 reconstruct.go:97] "Volume reconstruction finished" Mar 12 14:12:19.785116 master-0 kubenswrapper[7440]: I0312 14:12:19.784365 7440 reconciler.go:26] "Reconciler: start to sync state" Mar 12 14:12:19.787805 master-0 kubenswrapper[7440]: I0312 14:12:19.787759 7440 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 14:12:19.801851 master-0 kubenswrapper[7440]: I0312 14:12:19.801765 7440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 14:12:19.803563 master-0 kubenswrapper[7440]: I0312 14:12:19.803540 7440 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 12 14:12:19.803647 master-0 kubenswrapper[7440]: I0312 14:12:19.803582 7440 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 14:12:19.803647 master-0 kubenswrapper[7440]: I0312 14:12:19.803605 7440 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 14:12:19.803712 master-0 kubenswrapper[7440]: E0312 14:12:19.803653 7440 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 14:12:19.806011 master-0 kubenswrapper[7440]: I0312 14:12:19.805956 7440 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 14:12:19.813025 master-0 kubenswrapper[7440]: I0312 14:12:19.811961 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="f2ba438d34b4b3304e8d60d973e3309595cd9060a2ebe30a5d88db295ad25e25" exitCode=0 Mar 12 14:12:19.813158 master-0 kubenswrapper[7440]: I0312 14:12:19.813075 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="d39ce324f3db6164db245417f53b6d8ff38716c386224704af63bf67e207b5f1" exitCode=0 Mar 12 14:12:19.813158 master-0 kubenswrapper[7440]: I0312 14:12:19.813101 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="9fbd87c96fccfe4bfad334fd8c3bc1df622b06005839f21efff6ba86833c49f2" exitCode=0 Mar 12 14:12:19.813158 master-0 kubenswrapper[7440]: I0312 14:12:19.813114 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b" exitCode=0 Mar 12 14:12:19.813158 master-0 kubenswrapper[7440]: I0312 14:12:19.813148 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" 
containerID="badf1c98d1937a2f8e44bf83e8bf87b7da9889235c52744f099d88d3a841de7f" exitCode=0
Mar 12 14:12:19.813158 master-0 kubenswrapper[7440]: I0312 14:12:19.813157 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="cfa5b038bc7b07de92bf843b3a45833830090fe9d6879ece21a0622781be697c" exitCode=0
Mar 12 14:12:19.816993 master-0 kubenswrapper[7440]: I0312 14:12:19.816959 7440 generic.go:334] "Generic (PLEG): container finished" podID="146495bf-0787-483f-a9fc-0e8925b89150" containerID="6033bc31672a320e7b8ffbe7a63f79564d187ec798713169c640338dfe2b84c4" exitCode=0
Mar 12 14:12:19.819202 master-0 kubenswrapper[7440]: I0312 14:12:19.819171 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 14:12:19.819643 master-0 kubenswrapper[7440]: I0312 14:12:19.819583 7440 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac" exitCode=1
Mar 12 14:12:19.819714 master-0 kubenswrapper[7440]: I0312 14:12:19.819658 7440 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8" exitCode=0
Mar 12 14:12:19.825428 master-0 kubenswrapper[7440]: I0312 14:12:19.825393 7440 generic.go:334] "Generic (PLEG): container finished" podID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerID="e918e5e1279bbcaf698142b1c788174be79639920e9232ace941582c175becab" exitCode=0
Mar 12 14:12:19.828267 master-0 kubenswrapper[7440]: I0312 14:12:19.828111 7440 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca" exitCode=1
Mar 12 14:12:19.831785 master-0 kubenswrapper[7440]: I0312 14:12:19.831684 7440 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a" exitCode=0
Mar 12 14:12:19.836076 master-0 kubenswrapper[7440]: I0312 14:12:19.836029 7440 generic.go:334] "Generic (PLEG): container finished" podID="761993bb-2cba-4e1a-b304-36a24817af94" containerID="e511180297e76f6a11f5330905f38a15021808c15b34dd938afb52d0fc965c91" exitCode=0
Mar 12 14:12:19.905515 master-0 kubenswrapper[7440]: E0312 14:12:19.903762 7440 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:12:19.906590 master-0 kubenswrapper[7440]: I0312 14:12:19.906126 7440 manager.go:324] Recovery completed
Mar 12 14:12:19.952681 master-0 kubenswrapper[7440]: I0312 14:12:19.952632 7440 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 12 14:12:19.952681 master-0 kubenswrapper[7440]: I0312 14:12:19.952661 7440 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 14:12:19.952681 master-0 kubenswrapper[7440]: I0312 14:12:19.952692 7440 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 14:12:19.952990 master-0 kubenswrapper[7440]: I0312 14:12:19.952969 7440 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 14:12:19.953073 master-0 kubenswrapper[7440]: I0312 14:12:19.952988 7440 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 14:12:19.953073 master-0 kubenswrapper[7440]: I0312 14:12:19.953010 7440 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 12 14:12:19.953073 master-0 kubenswrapper[7440]: I0312 14:12:19.953018 7440 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 12 14:12:19.953073 master-0 kubenswrapper[7440]: I0312 14:12:19.953025 7440 policy_none.go:49] "None policy: Start"
Mar 12 14:12:19.955098 master-0 kubenswrapper[7440]: I0312 14:12:19.955054 7440 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 12 14:12:19.955178 master-0 kubenswrapper[7440]: I0312 14:12:19.955107 7440 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 14:12:19.955440 master-0 kubenswrapper[7440]: I0312 14:12:19.955420 7440 state_mem.go:75] "Updated machine memory state"
Mar 12 14:12:19.955440 master-0 kubenswrapper[7440]: I0312 14:12:19.955436 7440 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 12 14:12:19.964507 master-0 kubenswrapper[7440]: I0312 14:12:19.964485 7440 manager.go:334] "Starting Device Plugin manager"
Mar 12 14:12:19.964604 master-0 kubenswrapper[7440]: I0312 14:12:19.964548 7440 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 12 14:12:19.964604 master-0 kubenswrapper[7440]: I0312 14:12:19.964564 7440 server.go:79] "Starting device plugin registration server"
Mar 12 14:12:19.965045 master-0 kubenswrapper[7440]: I0312 14:12:19.965031 7440 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 14:12:19.965098 master-0 kubenswrapper[7440]: I0312 14:12:19.965049 7440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 14:12:19.965329 master-0 kubenswrapper[7440]: I0312 14:12:19.965302 7440 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 14:12:19.965430 master-0 kubenswrapper[7440]: I0312 14:12:19.965417 7440 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 14:12:19.965430 master-0 kubenswrapper[7440]: I0312 14:12:19.965426 7440 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 14:12:20.066105 master-0 kubenswrapper[7440]: I0312 14:12:20.066057 7440 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:12:20.067471 master-0 kubenswrapper[7440]: I0312 14:12:20.067432 7440 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:12:20.067471 master-0 kubenswrapper[7440]: I0312 14:12:20.067468 7440 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:12:20.067580 master-0 kubenswrapper[7440]: I0312 14:12:20.067478 7440 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:12:20.067580 master-0 kubenswrapper[7440]: I0312 14:12:20.067524 7440 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:12:20.076278 master-0 kubenswrapper[7440]: I0312 14:12:20.076219 7440 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 12 14:12:20.076466 master-0 kubenswrapper[7440]: I0312 14:12:20.076328 7440 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 14:12:20.104461 master-0 kubenswrapper[7440]: I0312 14:12:20.104329 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"]
Mar 12 14:12:20.105187 master-0 kubenswrapper[7440]: I0312 14:12:20.105134 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c"
Mar 12 14:12:20.105249 master-0 kubenswrapper[7440]: I0312 14:12:20.105185 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"464680c0443f63fd05a16f58ce52f9d2432c0930cf81a8fc5c4fea579afa01c4"}
Mar 12 14:12:20.105249 master-0 kubenswrapper[7440]: I0312 14:12:20.105246 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105258 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105269 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105278 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105288 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105296 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"b8604dab0ababfe57b1fd26a526dbe9889c845e06d2a34bab1a127fa06b3b512"}
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105307 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9ca791a9c31d32eb3b1f76dacaa4dbf6e803ba7631d129d0e8b60119983844"
Mar 12 14:12:20.105316 master-0 kubenswrapper[7440]: I0312 14:12:20.105315 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105324 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"24ee3eeca5a94629f5c47b0ce9433577ce076c824acff7a3bc086c327eefa56a"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105333 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105342 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105350 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105359 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105372 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105381 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105417 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89d1bc2f4a8ea2138bad228a8f181af661c81a072e9cd06792d7137bd4ebc43"
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105427 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694"}
Mar 12 14:12:20.105499 master-0 kubenswrapper[7440]: I0312 14:12:20.105435 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9"}
Mar 12 14:12:20.114370 master-0 kubenswrapper[7440]: E0312 14:12:20.114323 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.116734 master-0 kubenswrapper[7440]: W0312 14:12:20.116701 7440 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 12 14:12:20.116809 master-0 kubenswrapper[7440]: E0312 14:12:20.116746 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.120689 master-0 kubenswrapper[7440]: E0312 14:12:20.120638 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.120861 master-0 kubenswrapper[7440]: E0312 14:12:20.120731 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.120861 master-0 kubenswrapper[7440]: E0312 14:12:20.120744 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.191163 master-0 kubenswrapper[7440]: I0312 14:12:20.191102 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.191163 master-0 kubenswrapper[7440]: I0312 14:12:20.191149 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.191356 master-0 kubenswrapper[7440]: I0312 14:12:20.191211 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.191356 master-0 kubenswrapper[7440]: I0312 14:12:20.191295 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.191356 master-0 kubenswrapper[7440]: I0312 14:12:20.191345 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.191476 master-0 kubenswrapper[7440]: I0312 14:12:20.191374 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.191476 master-0 kubenswrapper[7440]: I0312 14:12:20.191390 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.191476 master-0 kubenswrapper[7440]: I0312 14:12:20.191425 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.191476 master-0 kubenswrapper[7440]: I0312 14:12:20.191452 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.191476 master-0 kubenswrapper[7440]: I0312 14:12:20.191469 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191487 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191506 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191524 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191542 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191557 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191574 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.191705 master-0 kubenswrapper[7440]: I0312 14:12:20.191602 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.291982 master-0 kubenswrapper[7440]: I0312 14:12:20.291910 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292033 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292079 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292130 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292152 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292130 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292187 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292205 master-0 kubenswrapper[7440]: I0312 14:12:20.292168 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292156 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292259 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292318 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292344 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292386 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292396 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292419 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.292443 master-0 kubenswrapper[7440]: I0312 14:12:20.292438 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292472 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292480 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292490 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292508 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292527 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292532 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292544 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292562 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292581 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292562 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292592 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292599 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292586 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292610 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292658 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:12:20.292701 master-0 kubenswrapper[7440]: I0312 14:12:20.292709 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.293326 master-0 kubenswrapper[7440]: I0312 14:12:20.292737 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:12:20.293326 master-0 kubenswrapper[7440]: I0312 14:12:20.292740 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 14:12:20.757326 master-0 kubenswrapper[7440]: I0312 14:12:20.757259 7440 apiserver.go:52] "Watching apiserver"
Mar 12 14:12:20.766559 master-0 kubenswrapper[7440]: I0312 14:12:20.766505 7440 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 14:12:20.767977 master-0 kubenswrapper[7440]: I0312 14:12:20.767948 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n9v7g","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82","assisted-installer/assisted-installer-controller-lbcvf","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv","openshift-marketplace/marketplace-operator-64bf9778cb-qzdff","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v","openshift-dns-operator/dns-operator-589895fbb7-q4wwv","openshift-ingress-operator/ingress-operator-677db989d6-44hhf","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-additional-cni-plugins-h868v","openshift-multus/multus-zttwz","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw","openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78","openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t","openshift-network-operator/network-operator-7c649bf6d4-ldxfn","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6","openshift-network-diagnostics/network-check-target-8q2fv","openshift-network-node-identity/network-node-identity-rqq4v","openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv","openshift-multus/multus-admission-controller-8d675b596-sm9nb","openshift-network-operator/iptables-alerter-vb4v5","openshift-ima
ge-registry/cluster-image-registry-operator-86d6d77c7c-54cr9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp","openshift-etcd/etcd-master-0-master-0","openshift-ovn-kubernetes/ovnkube-node-h4b4k","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4","openshift-cluster-version/cluster-version-operator-745944c6b7-vs878","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"] Mar 12 14:12:20.768204 master-0 kubenswrapper[7440]: I0312 14:12:20.768179 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:12:20.769888 master-0 kubenswrapper[7440]: I0312 14:12:20.769824 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:12:20.769959 master-0 kubenswrapper[7440]: I0312 14:12:20.769848 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:12:21.156399 master-0 kubenswrapper[7440]: I0312 14:12:21.156308 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.160198 master-0 kubenswrapper[7440]: I0312 14:12:21.160159 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:21.160360 master-0 kubenswrapper[7440]: I0312 14:12:21.160268 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:21.160360 master-0 kubenswrapper[7440]: I0312 14:12:21.160346 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:21.160632 master-0 kubenswrapper[7440]: I0312 14:12:21.160607 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:21.161030 master-0 kubenswrapper[7440]: I0312 14:12:21.161004 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:21.161330 master-0 kubenswrapper[7440]: I0312 14:12:21.161304 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:21.161663 master-0 kubenswrapper[7440]: I0312 14:12:21.161636 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.161742 master-0 kubenswrapper[7440]: I0312 14:12:21.161717 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:21.162025 master-0 kubenswrapper[7440]: I0312 14:12:21.162003 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:21.162452 master-0 kubenswrapper[7440]: I0312 14:12:21.162423 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.162452 master-0 kubenswrapper[7440]: I0312 14:12:21.162451 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:21.162572 master-0 kubenswrapper[7440]: I0312 14:12:21.162480 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.163788 master-0 kubenswrapper[7440]: I0312 14:12:21.163753 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:12:21.166048 master-0 kubenswrapper[7440]: I0312 14:12:21.166011 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 14:12:21.166446 master-0 kubenswrapper[7440]: I0312 14:12:21.166417 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:12:21.166562 master-0 kubenswrapper[7440]: I0312 14:12:21.166532 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:12:21.166627 master-0 kubenswrapper[7440]: I0312 14:12:21.166611 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 14:12:21.166728 master-0 kubenswrapper[7440]: I0312 14:12:21.166688 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 14:12:21.170606 master-0 kubenswrapper[7440]: I0312 14:12:21.170568 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 14:12:21.170606 master-0 kubenswrapper[7440]: I0312 14:12:21.170592 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.170754 master-0 kubenswrapper[7440]: I0312 14:12:21.170723 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 14:12:21.176706 master-0 
kubenswrapper[7440]: I0312 14:12:21.175200 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 14:12:21.176706 master-0 kubenswrapper[7440]: I0312 14:12:21.175333 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 14:12:21.178669 master-0 kubenswrapper[7440]: I0312 14:12:21.177302 7440 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 12 14:12:21.180133 master-0 kubenswrapper[7440]: I0312 14:12:21.179970 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 14:12:21.180524 master-0 kubenswrapper[7440]: I0312 14:12:21.180492 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 14:12:21.180731 master-0 kubenswrapper[7440]: I0312 14:12:21.180704 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 14:12:21.180872 master-0 kubenswrapper[7440]: I0312 14:12:21.180844 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:12:21.181004 master-0 kubenswrapper[7440]: I0312 14:12:21.180967 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.181073 master-0 kubenswrapper[7440]: I0312 14:12:21.181055 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 14:12:21.181163 master-0 kubenswrapper[7440]: I0312 14:12:21.181148 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 14:12:21.181244 
master-0 kubenswrapper[7440]: I0312 14:12:21.181227 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 14:12:21.181334 master-0 kubenswrapper[7440]: I0312 14:12:21.181317 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.181414 master-0 kubenswrapper[7440]: I0312 14:12:21.181398 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:12:21.181497 master-0 kubenswrapper[7440]: I0312 14:12:21.181481 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:12:21.181616 master-0 kubenswrapper[7440]: I0312 14:12:21.181582 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 14:12:21.182731 master-0 kubenswrapper[7440]: I0312 14:12:21.182609 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.182731 master-0 kubenswrapper[7440]: I0312 14:12:21.182713 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:12:21.182826 master-0 kubenswrapper[7440]: I0312 14:12:21.182791 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:12:21.182954 master-0 kubenswrapper[7440]: I0312 14:12:21.182937 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:12:21.183021 master-0 kubenswrapper[7440]: I0312 14:12:21.183006 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 
12 14:12:21.183124 master-0 kubenswrapper[7440]: I0312 14:12:21.183081 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:12:21.183173 master-0 kubenswrapper[7440]: I0312 14:12:21.183161 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:12:21.183250 master-0 kubenswrapper[7440]: I0312 14:12:21.183232 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:12:21.183350 master-0 kubenswrapper[7440]: I0312 14:12:21.183324 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:12:21.183410 master-0 kubenswrapper[7440]: I0312 14:12:21.183395 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:12:21.183476 master-0 kubenswrapper[7440]: I0312 14:12:21.183461 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:12:21.183525 master-0 kubenswrapper[7440]: I0312 14:12:21.183499 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:12:21.183573 master-0 kubenswrapper[7440]: I0312 14:12:21.183525 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.183573 master-0 kubenswrapper[7440]: I0312 14:12:21.183464 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.183679 master-0 kubenswrapper[7440]: I0312 14:12:21.183663 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" 
Mar 12 14:12:21.183761 master-0 kubenswrapper[7440]: I0312 14:12:21.183747 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.183882 master-0 kubenswrapper[7440]: I0312 14:12:21.183854 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:12:21.183941 master-0 kubenswrapper[7440]: I0312 14:12:21.183882 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 14:12:21.184007 master-0 kubenswrapper[7440]: I0312 14:12:21.183961 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.184007 master-0 kubenswrapper[7440]: I0312 14:12:21.183979 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:12:21.184086 master-0 kubenswrapper[7440]: I0312 14:12:21.184065 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 14:12:21.184086 master-0 kubenswrapper[7440]: I0312 14:12:21.184076 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 14:12:21.184156 master-0 kubenswrapper[7440]: I0312 14:12:21.184138 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:12:21.184156 master-0 kubenswrapper[7440]: I0312 14:12:21.184145 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:12:21.184913 master-0 kubenswrapper[7440]: I0312 14:12:21.184852 7440 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 14:12:21.184968 master-0 kubenswrapper[7440]: I0312 14:12:21.184914 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:12:21.185007 master-0 kubenswrapper[7440]: I0312 14:12:21.184967 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 14:12:21.185077 master-0 kubenswrapper[7440]: I0312 14:12:21.185049 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:12:21.185240 master-0 kubenswrapper[7440]: I0312 14:12:21.185212 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.185599 master-0 kubenswrapper[7440]: I0312 14:12:21.185551 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:12:21.186063 master-0 kubenswrapper[7440]: I0312 14:12:21.185989 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 14:12:21.186689 master-0 kubenswrapper[7440]: I0312 14:12:21.186653 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 14:12:21.186924 master-0 kubenswrapper[7440]: I0312 14:12:21.186880 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:12:21.186970 master-0 kubenswrapper[7440]: I0312 14:12:21.186953 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 14:12:21.187200 master-0 kubenswrapper[7440]: I0312 14:12:21.187174 7440 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:12:21.187318 master-0 kubenswrapper[7440]: I0312 14:12:21.187293 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 14:12:21.187358 master-0 kubenswrapper[7440]: I0312 14:12:21.187319 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 14:12:21.187415 master-0 kubenswrapper[7440]: I0312 14:12:21.187296 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 14:12:21.187415 master-0 kubenswrapper[7440]: I0312 14:12:21.187391 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:12:21.187486 master-0 kubenswrapper[7440]: I0312 14:12:21.187452 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:12:21.187547 master-0 kubenswrapper[7440]: I0312 14:12:21.187526 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:12:21.187586 master-0 kubenswrapper[7440]: I0312 14:12:21.187572 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.187663 master-0 kubenswrapper[7440]: I0312 14:12:21.187649 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:12:21.187718 master-0 kubenswrapper[7440]: I0312 14:12:21.187701 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 14:12:21.187761 master-0 kubenswrapper[7440]: I0312 14:12:21.187752 7440 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 14:12:21.187871 master-0 kubenswrapper[7440]: I0312 14:12:21.187845 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.187871 master-0 kubenswrapper[7440]: I0312 14:12:21.187867 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:12:21.188033 master-0 kubenswrapper[7440]: I0312 14:12:21.188015 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 14:12:21.188033 master-0 kubenswrapper[7440]: I0312 14:12:21.188023 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 14:12:21.188108 master-0 kubenswrapper[7440]: I0312 14:12:21.188098 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 14:12:21.188150 master-0 kubenswrapper[7440]: I0312 14:12:21.188132 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:12:21.188191 master-0 kubenswrapper[7440]: I0312 14:12:21.188171 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:12:21.188228 master-0 kubenswrapper[7440]: I0312 14:12:21.187175 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:12:21.188228 master-0 kubenswrapper[7440]: I0312 14:12:21.188220 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:12:21.188308 master-0 kubenswrapper[7440]: I0312 14:12:21.188133 7440 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:12:21.188308 master-0 kubenswrapper[7440]: I0312 14:12:21.188296 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 14:12:21.188376 master-0 kubenswrapper[7440]: I0312 14:12:21.188330 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:12:21.188376 master-0 kubenswrapper[7440]: I0312 14:12:21.188129 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 14:12:21.188376 master-0 kubenswrapper[7440]: I0312 14:12:21.188276 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 14:12:21.188482 master-0 kubenswrapper[7440]: I0312 14:12:21.188446 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 14:12:21.188482 master-0 kubenswrapper[7440]: I0312 14:12:21.188457 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:12:21.188573 master-0 kubenswrapper[7440]: I0312 14:12:21.188555 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:12:21.188630 master-0 kubenswrapper[7440]: I0312 14:12:21.188613 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.188676 master-0 kubenswrapper[7440]: I0312 14:12:21.188638 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:12:21.189197 master-0 kubenswrapper[7440]: I0312 14:12:21.189168 7440 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 14:12:21.189247 master-0 kubenswrapper[7440]: I0312 14:12:21.189230 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:12:21.189541 master-0 kubenswrapper[7440]: I0312 14:12:21.189513 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:12:21.189581 master-0 kubenswrapper[7440]: I0312 14:12:21.189572 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 14:12:21.189618 master-0 kubenswrapper[7440]: I0312 14:12:21.189593 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:12:21.189655 master-0 kubenswrapper[7440]: I0312 14:12:21.189634 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 14:12:21.189705 master-0 kubenswrapper[7440]: I0312 14:12:21.189677 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 14:12:21.189705 master-0 kubenswrapper[7440]: I0312 14:12:21.189687 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:12:21.190391 master-0 kubenswrapper[7440]: I0312 14:12:21.190355 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:12:21.190391 master-0 kubenswrapper[7440]: I0312 14:12:21.190387 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 14:12:21.190679 master-0 kubenswrapper[7440]: I0312 14:12:21.190486 7440 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:12:21.196987 master-0 kubenswrapper[7440]: I0312 14:12:21.196950 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 14:12:21.205586 master-0 kubenswrapper[7440]: I0312 14:12:21.205379 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 14:12:21.205774 master-0 kubenswrapper[7440]: I0312 14:12:21.205656 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 14:12:21.206943 master-0 kubenswrapper[7440]: I0312 14:12:21.206923 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:12:21.214069 master-0 kubenswrapper[7440]: E0312 14:12:21.214024 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:21.214622 master-0 kubenswrapper[7440]: W0312 14:12:21.214574 7440 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") 
Mar 12 14:12:21.214695 master-0 kubenswrapper[7440]: E0312 14:12:21.214657 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:12:21.214754 master-0 kubenswrapper[7440]: I0312 14:12:21.214727 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 14:12:21.214939 master-0 kubenswrapper[7440]: E0312 14:12:21.214916 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 14:12:21.214998 master-0 kubenswrapper[7440]: E0312 14:12:21.214953 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 14:12:21.216146 master-0 kubenswrapper[7440]: E0312 14:12:21.216120 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:12:21.261821 master-0 kubenswrapper[7440]: I0312 14:12:21.261780 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:21.262114 master-0 kubenswrapper[7440]: I0312 14:12:21.262091 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" 
(UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.262218 master-0 kubenswrapper[7440]: I0312 14:12:21.262201 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.262306 master-0 kubenswrapper[7440]: I0312 14:12:21.262292 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:21.262394 master-0 kubenswrapper[7440]: I0312 14:12:21.262380 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.262480 master-0 kubenswrapper[7440]: I0312 14:12:21.262466 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:21.262570 master-0 kubenswrapper[7440]: I0312 14:12:21.262555 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:12:21.262661 master-0 kubenswrapper[7440]: I0312 14:12:21.262646 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:21.262753 master-0 kubenswrapper[7440]: I0312 14:12:21.262739 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:12:21.262840 master-0 kubenswrapper[7440]: I0312 14:12:21.262824 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:21.262949 master-0 kubenswrapper[7440]: I0312 14:12:21.262932 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: 
\"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.263051 master-0 kubenswrapper[7440]: I0312 14:12:21.263036 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:21.263135 master-0 kubenswrapper[7440]: I0312 14:12:21.263121 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.263216 master-0 kubenswrapper[7440]: I0312 14:12:21.263202 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.264170 master-0 kubenswrapper[7440]: I0312 14:12:21.264118 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.264253 master-0 kubenswrapper[7440]: I0312 14:12:21.264184 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.264253 master-0 kubenswrapper[7440]: I0312 14:12:21.264203 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:12:21.264253 master-0 kubenswrapper[7440]: I0312 14:12:21.264216 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.264253 master-0 kubenswrapper[7440]: I0312 14:12:21.264248 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.264395 master-0 kubenswrapper[7440]: I0312 14:12:21.264282 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: 
\"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:21.264395 master-0 kubenswrapper[7440]: I0312 14:12:21.264309 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.264395 master-0 kubenswrapper[7440]: I0312 14:12:21.264333 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:21.264395 master-0 kubenswrapper[7440]: I0312 14:12:21.264363 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:21.264395 master-0 kubenswrapper[7440]: I0312 14:12:21.264391 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264421 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264451 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264485 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264513 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264543 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 
14:12:21.264583 master-0 kubenswrapper[7440]: I0312 14:12:21.264577 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264606 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264633 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264664 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264694 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264722 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264749 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264777 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264806 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.264838 master-0 kubenswrapper[7440]: I0312 14:12:21.264835 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.264869 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.264913 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.264940 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.264964 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: 
\"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.264991 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.265004 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.265017 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.263277 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 
14:12:21.262673 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.265047 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.265189 master-0 kubenswrapper[7440]: I0312 14:12:21.263860 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:12:21.265570 master-0 kubenswrapper[7440]: I0312 14:12:21.264053 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:12:21.265570 master-0 kubenswrapper[7440]: I0312 14:12:21.264057 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod 
\"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:21.265570 master-0 kubenswrapper[7440]: I0312 14:12:21.263913 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.266374 master-0 kubenswrapper[7440]: I0312 14:12:21.266348 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.266429 master-0 kubenswrapper[7440]: I0312 14:12:21.266412 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.266553 master-0 kubenswrapper[7440]: I0312 14:12:21.266533 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.266597 master-0 kubenswrapper[7440]: I0312 14:12:21.266574 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:12:21.266644 master-0 kubenswrapper[7440]: I0312 14:12:21.266626 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:21.266700 master-0 kubenswrapper[7440]: I0312 14:12:21.266678 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.266700 master-0 kubenswrapper[7440]: I0312 14:12:21.266693 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:21.266776 master-0 kubenswrapper[7440]: I0312 14:12:21.266719 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 
14:12:21.266776 master-0 kubenswrapper[7440]: I0312 14:12:21.266770 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:21.266932 master-0 kubenswrapper[7440]: I0312 14:12:21.266858 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.266932 master-0 kubenswrapper[7440]: I0312 14:12:21.266891 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:21.267017 master-0 kubenswrapper[7440]: I0312 14:12:21.266950 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.267017 master-0 kubenswrapper[7440]: I0312 14:12:21.266970 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " 
pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:12:21.267096 master-0 kubenswrapper[7440]: I0312 14:12:21.267047 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:21.267096 master-0 kubenswrapper[7440]: I0312 14:12:21.267082 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:21.267177 master-0 kubenswrapper[7440]: I0312 14:12:21.267152 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.267215 master-0 kubenswrapper[7440]: I0312 14:12:21.267183 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267215 master-0 kubenswrapper[7440]: I0312 14:12:21.267206 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267295 master-0 kubenswrapper[7440]: I0312 14:12:21.267260 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:12:21.267295 master-0 kubenswrapper[7440]: I0312 14:12:21.267271 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:12:21.267295 master-0 kubenswrapper[7440]: I0312 14:12:21.267278 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:12:21.267410 master-0 kubenswrapper[7440]: I0312 14:12:21.267303 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:21.267410 master-0 kubenswrapper[7440]: I0312 
14:12:21.267370 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.267484 master-0 kubenswrapper[7440]: I0312 14:12:21.267425 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267484 master-0 kubenswrapper[7440]: I0312 14:12:21.267456 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:21.267484 master-0 kubenswrapper[7440]: I0312 14:12:21.267466 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267590 master-0 kubenswrapper[7440]: I0312 14:12:21.267487 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " 
pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:21.267590 master-0 kubenswrapper[7440]: I0312 14:12:21.267561 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:12:21.267667 master-0 kubenswrapper[7440]: I0312 14:12:21.267591 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.267667 master-0 kubenswrapper[7440]: I0312 14:12:21.267621 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:21.267667 master-0 kubenswrapper[7440]: I0312 14:12:21.267644 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267776 master-0 kubenswrapper[7440]: I0312 14:12:21.267668 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.267776 master-0 kubenswrapper[7440]: I0312 14:12:21.267694 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.267776 master-0 kubenswrapper[7440]: I0312 14:12:21.267759 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:12:21.267888 master-0 kubenswrapper[7440]: I0312 14:12:21.267799 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.267888 master-0 kubenswrapper[7440]: I0312 14:12:21.267831 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.267888 master-0 kubenswrapper[7440]: I0312 14:12:21.267858 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.267888 master-0 kubenswrapper[7440]: I0312 14:12:21.267872 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"
Mar 12 14:12:21.267888 master-0 kubenswrapper[7440]: I0312 14:12:21.267880 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:21.268088 master-0 kubenswrapper[7440]: I0312 14:12:21.267924 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:21.268088 master-0 kubenswrapper[7440]: I0312 14:12:21.267955 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") "
pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:12:21.268088 master-0 kubenswrapper[7440]: I0312 14:12:21.267978 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:12:21.268088 master-0 kubenswrapper[7440]: I0312 14:12:21.268004 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:12:21.268088 master-0 kubenswrapper[7440]: I0312 14:12:21.268032 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:12:21.268344 master-0 kubenswrapper[7440]: I0312 14:12:21.268318 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:12:21.268344 master-0 kubenswrapper[7440]: I0312 14:12:21.268338 7440 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:12:21.268422 master-0 kubenswrapper[7440]: I0312 14:12:21.268357 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:12:21.268422 master-0 kubenswrapper[7440]: I0312 14:12:21.268387 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:12:21.268422 master-0 kubenswrapper[7440]: I0312 14:12:21.268409 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:12:21.268546 master-0 kubenswrapper[7440]: I0312 14:12:21.268437 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") "
pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.268546 master-0 kubenswrapper[7440]: I0312 14:12:21.268475 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:12:21.268546 master-0 kubenswrapper[7440]: I0312 14:12:21.268491 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:12:21.268546 master-0 kubenswrapper[7440]: I0312 14:12:21.268503 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:21.268758 master-0 kubenswrapper[7440]: I0312 14:12:21.268729 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.268995 master-0 kubenswrapper[7440]: I0312 14:12:21.268971 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:12:21.269078 master-0 kubenswrapper[7440]: I0312 14:12:21.269051 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5"
Mar 12 14:12:21.269135 master-0 kubenswrapper[7440]: I0312 14:12:21.269097 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"
Mar 12 14:12:21.269135 master-0 kubenswrapper[7440]: I0312 14:12:21.269125 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:21.269206 master-0 kubenswrapper[7440]: I0312 14:12:21.269149 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") "
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:21.269206 master-0 kubenswrapper[7440]: I0312 14:12:21.269173 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.269206 master-0 kubenswrapper[7440]: I0312 14:12:21.269194 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.269315 master-0 kubenswrapper[7440]: I0312 14:12:21.269218 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv"
Mar 12 14:12:21.269315 master-0 kubenswrapper[7440]: I0312 14:12:21.269247 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:21.269315 master-0 kubenswrapper[7440]: I0312 14:12:21.269271 7440 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:21.269315 master-0 kubenswrapper[7440]: I0312 14:12:21.269287 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:21.269315 master-0 kubenswrapper[7440]: I0312 14:12:21.269295 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:12:21.269623 master-0 kubenswrapper[7440]: I0312 14:12:21.269596 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:21.269672 master-0 kubenswrapper[7440]: I0312 14:12:21.269652 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID:
\"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:21.270031 master-0 kubenswrapper[7440]: I0312 14:12:21.270006 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.270138 master-0 kubenswrapper[7440]: I0312 14:12:21.270114 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:21.270266 master-0 kubenswrapper[7440]: I0312 14:12:21.270243 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:21.270359 master-0 kubenswrapper[7440]: I0312 14:12:21.270304 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:12:21.270500 master-0 kubenswrapper[7440]: I0312 14:12:21.270473 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName:
\"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:12:21.270554 master-0 kubenswrapper[7440]: I0312 14:12:21.270523 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:21.270615 master-0 kubenswrapper[7440]: I0312 14:12:21.270535 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:12:21.270665 master-0 kubenswrapper[7440]: I0312 14:12:21.270607 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:21.270708 master-0 kubenswrapper[7440]: I0312 14:12:21.270666 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") "
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"
Mar 12 14:12:21.270774 master-0 kubenswrapper[7440]: I0312 14:12:21.270745 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:12:21.270834 master-0 kubenswrapper[7440]: I0312 14:12:21.270805 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:12:21.270881 master-0 kubenswrapper[7440]: I0312 14:12:21.270857 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:12:21.270881 master-0 kubenswrapper[7440]: I0312 14:12:21.270859 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.271026 master-0 kubenswrapper[7440]: I0312 14:12:21.270949 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:12:21.271026 master-0 kubenswrapper[7440]: I0312 14:12:21.270990 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:21.271026 master-0 kubenswrapper[7440]: I0312 14:12:21.271017 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:21.271136 master-0 kubenswrapper[7440]: I0312 14:12:21.271101 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:12:21.271204 master-0 kubenswrapper[7440]: I0312 14:12:21.271177 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12
14:12:21.271327 master-0 kubenswrapper[7440]: I0312 14:12:21.271308 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:21.271479 master-0 kubenswrapper[7440]: I0312 14:12:21.271461 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.271565 master-0 kubenswrapper[7440]: I0312 14:12:21.271229 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.271676 master-0 kubenswrapper[7440]: I0312 14:12:21.271660 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:12:21.271776 master-0 kubenswrapper[7440]: I0312 14:12:21.271761 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID:
\"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"
Mar 12 14:12:21.271880 master-0 kubenswrapper[7440]: I0312 14:12:21.271859 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:12:21.271942 master-0 kubenswrapper[7440]: I0312 14:12:21.271861 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:21.271942 master-0 kubenswrapper[7440]: I0312 14:12:21.271601 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.271942 master-0 kubenswrapper[7440]: I0312 14:12:21.271917 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:12:21.271942 master-0 kubenswrapper[7440]: I0312 14:12:21.271941 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:21.272092 master-0 kubenswrapper[7440]: I0312 14:12:21.272032 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:12:21.272092 master-0 kubenswrapper[7440]: I0312 14:12:21.272060 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:12:21.272092 master-0 kubenswrapper[7440]: I0312 14:12:21.272083 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.272218 master-0 kubenswrapper[7440]: I0312 14:12:21.272106 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") "
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:21.272218 master-0 kubenswrapper[7440]: I0312 14:12:21.272126 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:21.272218 master-0 kubenswrapper[7440]: I0312 14:12:21.272148 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"
Mar 12 14:12:21.272382 master-0 kubenswrapper[7440]: I0312 14:12:21.272361 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:21.272464 master-0 kubenswrapper[7440]: I0312 14:12:21.272447 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:21.272725 master-0 kubenswrapper[7440]: I0312 14:12:21.272707 7440 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:21.272823 master-0 kubenswrapper[7440]: I0312 14:12:21.272787 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"
Mar 12 14:12:21.273465 master-0 kubenswrapper[7440]: I0312 14:12:21.273428 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:12:21.273693 master-0 kubenswrapper[7440]: I0312 14:12:21.273674 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:12:21.273797 master-0 kubenswrapper[7440]: I0312 14:12:21.273777 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID:
\"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"
Mar 12 14:12:21.274400 master-0 kubenswrapper[7440]: I0312 14:12:21.273460 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"
Mar 12 14:12:21.274459 master-0 kubenswrapper[7440]: I0312 14:12:21.274405 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v"
Mar 12 14:12:21.274459 master-0 kubenswrapper[7440]: I0312 14:12:21.274425 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:12:21.274459 master-0 kubenswrapper[7440]: I0312 14:12:21.274447 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") "
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274465 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274486 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274506 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274524 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274544 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.274569 master-0 kubenswrapper[7440]: I0312 14:12:21.274563 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274580 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274598 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274617 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: 
\"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274634 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274649 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274667 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274685 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:21.274776 master-0 kubenswrapper[7440]: I0312 14:12:21.274701 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.275119 master-0 kubenswrapper[7440]: I0312 14:12:21.274927 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:21.275502 master-0 kubenswrapper[7440]: I0312 14:12:21.275207 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.275502 master-0 kubenswrapper[7440]: I0312 14:12:21.275464 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:21.275502 master-0 kubenswrapper[7440]: I0312 14:12:21.275494 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 
14:12:21.340608 master-0 kubenswrapper[7440]: I0312 14:12:21.340522 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.342032 master-0 kubenswrapper[7440]: I0312 14:12:21.341996 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:12:21.342935 master-0 kubenswrapper[7440]: I0312 14:12:21.342806 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:21.343434 master-0 kubenswrapper[7440]: I0312 14:12:21.343397 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:21.352499 master-0 kubenswrapper[7440]: I0312 14:12:21.352469 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmrv\" 
(UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.371765 master-0 kubenswrapper[7440]: I0312 14:12:21.370845 7440 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:12:21.371765 master-0 kubenswrapper[7440]: I0312 14:12:21.370920 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.375838 master-0 kubenswrapper[7440]: I0312 14:12:21.375798 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.375956 master-0 kubenswrapper[7440]: I0312 14:12:21.375844 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:21.375956 master-0 kubenswrapper[7440]: I0312 14:12:21.375875 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod 
\"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:21.375956 master-0 kubenswrapper[7440]: I0312 14:12:21.375912 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.375956 master-0 kubenswrapper[7440]: I0312 14:12:21.375940 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.375965 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.375992 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.376011 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.376030 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.376050 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376086 master-0 kubenswrapper[7440]: I0312 14:12:21.376070 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376091 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376111 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376131 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376150 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376181 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376209 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376227 
7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376265 master-0 kubenswrapper[7440]: I0312 14:12:21.376247 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376272 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376292 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376317 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376341 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376367 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376407 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376432 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376452 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " 
pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.376564 master-0 kubenswrapper[7440]: I0312 14:12:21.376475 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.376628 7440 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376669 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.376688 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.876670407 +0000 UTC m=+2.212048976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376740 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.376795 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.376824 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.87681321 +0000 UTC m=+2.212191769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376849 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376879 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376922 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376945 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376968 
7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.376968 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.377011 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377026 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.377034 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377047 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:21.877038875 +0000 UTC m=+2.212417434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377072 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.377078 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377094 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.877086597 +0000 UTC m=+2.212465166 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377119 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: E0312 14:12:21.377138 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.877131969 +0000 UTC m=+2.212510528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:21.377114 master-0 kubenswrapper[7440]: I0312 14:12:21.377138 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377157 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " 
pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377181 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377192 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377201 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.87719496 +0000 UTC m=+2.212573519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377222 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377223 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377118 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377246 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 
kubenswrapper[7440]: I0312 14:12:21.377256 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377289 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377315 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.376494 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377398 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377413 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377428 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377443 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377447 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377466 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377490 7440 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377522 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377495 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377541 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377534 7440 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377602 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377585 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377633 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.87762184 +0000 UTC m=+2.213000399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: E0312 14:12:21.377763 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.877751634 +0000 UTC m=+2.213130193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377567 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377655 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377788 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377806 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.377800 master-0 
kubenswrapper[7440]: I0312 14:12:21.377819 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:12:21.377800 master-0 kubenswrapper[7440]: I0312 14:12:21.377824 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377844 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377853 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377885 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:12:21.379006 master-0 
kubenswrapper[7440]: I0312 14:12:21.377928 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377960 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377979 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.377995 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378013 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: 
\"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378049 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378070 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378087 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378103 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378119 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod 
\"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378144 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378160 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378200 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378243 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378265 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.878257636 +0000 UTC m=+2.213636195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378285 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378321 7440 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378339 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.878333098 +0000 UTC m=+2.213711657 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378372 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378389 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.878383329 +0000 UTC m=+2.213761888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378421 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378429 7440 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378475 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls 
podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.878466391 +0000 UTC m=+2.213844950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378510 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: E0312 14:12:21.378529 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:21.878522863 +0000 UTC m=+2.213901412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378551 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378569 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.379006 master-0 kubenswrapper[7440]: I0312 14:12:21.378596 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: I0312 14:12:21.927986 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: I0312 14:12:21.928078 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: E0312 14:12:21.928247 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: E0312 14:12:21.928315 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.928296376 +0000 UTC m=+3.263674935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: I0312 14:12:21.928343 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:21.929619 master-0 kubenswrapper[7440]: I0312 14:12:21.928384 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod 
\"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:21.930831 master-0 kubenswrapper[7440]: E0312 14:12:21.928250 7440 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:21.930831 master-0 kubenswrapper[7440]: E0312 14:12:21.930718 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.930697986 +0000 UTC m=+3.266076545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.930838 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.930870 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.93086037 +0000 UTC m=+3.266238929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.930940 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.928483 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.931156 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.931355 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.930963052 +0000 UTC m=+3.266341611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931510 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931588 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931619 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931646 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931710 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931747 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931804 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: I0312 14:12:21.931832 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb"
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.931964 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.931997 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.931987198 +0000 UTC m=+3.267365757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932019 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932011748 +0000 UTC m=+3.267390307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932062 7440 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932085 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.93207744 +0000 UTC m=+3.267455999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932135 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932157 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932150102 +0000 UTC m=+3.267528661 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932195 7440 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932217 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932210073 +0000 UTC m=+3.267588632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932266 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932290 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932282625 +0000 UTC m=+3.267661194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932371 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932397 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932388967 +0000 UTC m=+3.267767526 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932438 7440 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932460 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932453559 +0000 UTC m=+3.267832118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932667 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 14:12:21.932838 master-0 kubenswrapper[7440]: E0312 14:12:21.932743 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:22.932734096 +0000 UTC m=+3.268112655 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found
Mar 12 14:12:22.033528 master-0 kubenswrapper[7440]: I0312 14:12:22.013040 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5"
Mar 12 14:12:22.034853 master-0 kubenswrapper[7440]: I0312 14:12:22.034820 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.035739 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.036430 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.036535 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.036541 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037070 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037177 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037201 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037269 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037679 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.037699 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.038071 7440 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.038398 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:12:22.038685 master-0 kubenswrapper[7440]: I0312 14:12:22.038662 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:22.039621 master-0 kubenswrapper[7440]: I0312 14:12:22.039164 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:22.039621 master-0 kubenswrapper[7440]: I0312 14:12:22.039221 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:22.039621 master-0 kubenswrapper[7440]: I0312 14:12:22.039390 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v"
Mar 12 14:12:22.039801 master-0 kubenswrapper[7440]: I0312 14:12:22.039693 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5"
Mar 12 14:12:22.039876 master-0 kubenswrapper[7440]: I0312 14:12:22.039827 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb"
Mar 12 14:12:22.040407 master-0 kubenswrapper[7440]: I0312 14:12:22.040361 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv"
Mar 12 14:12:22.040407 master-0 kubenswrapper[7440]: I0312 14:12:22.040382 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:12:22.040740 master-0 kubenswrapper[7440]: I0312 14:12:22.040639 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:12:22.041358 master-0 kubenswrapper[7440]: I0312 14:12:22.041299 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4"
Mar 12 14:12:22.042934 master-0 kubenswrapper[7440]: I0312 14:12:22.042888 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:12:22.043096 master-0 kubenswrapper[7440]: I0312 14:12:22.043063 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:12:22.045450 master-0 kubenswrapper[7440]: I0312 14:12:22.045412 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:12:22.046316 master-0 kubenswrapper[7440]: I0312 14:12:22.045577 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:12:22.048702 master-0 kubenswrapper[7440]: I0312 14:12:22.048643 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:12:22.061319 master-0 kubenswrapper[7440]: I0312 14:12:22.061179 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:12:22.099960 master-0 kubenswrapper[7440]: E0312 14:12:22.099878 7440 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282"
Mar 12 14:12:22.100194 master-0 kubenswrapper[7440]: E0312 14:12:22.100129 7440 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-5c74bfc494-vpn8v_openshift-kube-scheduler-operator(08ea0d9f-0635-4759-803e-572eca2f2d34): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 12 14:12:22.102233 master-0 kubenswrapper[7440]: E0312 14:12:22.102188 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" podUID="08ea0d9f-0635-4759-803e-572eca2f2d34"
Mar 12 14:12:22.942135 master-0 kubenswrapper[7440]: I0312 14:12:22.942083 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:22.942135 master-0 kubenswrapper[7440]: I0312 14:12:22.942140 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942232 7440 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942254 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942272 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942258152 +0000 UTC m=+5.277636711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942361 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942410 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942383005 +0000 UTC m=+5.277761574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942424 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942471 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942460938 +0000 UTC m=+5.277839497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942469 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942506 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942514 7440 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942547 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942542 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942644 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942656 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942617901 +0000 UTC m=+5.277996520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942716 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942671513 +0000 UTC m=+5.278050142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942743 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942776 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942758015 +0000 UTC m=+5.278136644 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942784 7440 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942817 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942842 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942848 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942836247 +0000 UTC m=+5.278214806 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942872 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942891 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942936 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.942927299 +0000 UTC m=+5.278305858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.942956 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:24.942942729 +0000 UTC m=+5.278321338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.942986 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.943029 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: I0312 14:12:22.943058 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:22.943081 master-0 kubenswrapper[7440]: E0312 14:12:22.943107 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:22.943081 master-0 
kubenswrapper[7440]: E0312 14:12:22.943120 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943135 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.943127724 +0000 UTC m=+5.278506283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943150 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.943143384 +0000 UTC m=+5.278521943 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943176 7440 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943207 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.943199505 +0000 UTC m=+5.278578064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: I0312 14:12:22.943230 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943328 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:22.944526 master-0 kubenswrapper[7440]: E0312 14:12:22.943366 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:24.943356689 +0000 UTC m=+5.278735248 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:23.111066 master-0 kubenswrapper[7440]: E0312 14:12:23.110991 7440 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" Mar 12 14:12:23.111321 master-0 kubenswrapper[7440]: E0312 14:12:23.111171 7440 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cqkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-vb4v5_openshift-network-operator(b9d51570-06dd-4e2f-9c19-07fb694279ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 14:12:23.112541 master-0 kubenswrapper[7440]: E0312 14:12:23.112494 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-vb4v5" podUID="b9d51570-06dd-4e2f-9c19-07fb694279ae" Mar 12 14:12:23.401409 master-0 kubenswrapper[7440]: I0312 14:12:23.401305 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:12:23.618131 master-0 kubenswrapper[7440]: 
E0312 14:12:23.618084 7440 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3" Mar 12 14:12:23.618330 master-0 kubenswrapper[7440]: E0312 14:12:23.618274 7440 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3,Command:[cluster-etcd-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml --terminate-on-files=/var/run/secrets/serving-cert/tls.crt --terminate-on-files=/var/run/secrets/serving-cert/tls.key --terminate-on-files=/var/run/secrets/etcd-client/tls.crt --terminate-on-files=/var/run/secrets/etcd-client/tls.key --terminate-on-files=/var/run/configmaps/etcd-ca/ca-bundle.crt --terminate-on-files=/var/run/configmaps/etcd-service-ca/service-ca.crt],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPENSHIFT_PROFILE,Value:web,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 
50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-service-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcwrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
etcd-operator-5884b9cd56-mjxsv_openshift-etcd-operator(8d775283-2696-4411-8ddf-d4e6000f0a0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 14:12:23.619661 master-0 kubenswrapper[7440]: E0312 14:12:23.619617 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" Mar 12 14:12:23.825193 master-0 kubenswrapper[7440]: I0312 14:12:23.824828 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8q2fv"] Mar 12 14:12:24.073158 master-0 kubenswrapper[7440]: I0312 14:12:24.072670 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:24.077126 master-0 kubenswrapper[7440]: I0312 14:12:24.077101 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:24.182422 master-0 kubenswrapper[7440]: I0312 14:12:24.182367 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"35b73de7804cd72eded0d5a260eb4f658c50b3bf884978dd585c75921ee17b06"} Mar 12 14:12:24.183337 master-0 kubenswrapper[7440]: I0312 14:12:24.183296 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"4767c99ca8b14443f1382cd9b5a19a4aba786928a26c41b8fce765c6d6383500"} Mar 12 14:12:24.184497 master-0 kubenswrapper[7440]: 
I0312 14:12:24.184469 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"fa8693b6924bc011b2e5ff580645ad5ee2dc963897660400a6b7a2add716cfc2"} Mar 12 14:12:24.185560 master-0 kubenswrapper[7440]: I0312 14:12:24.185508 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d"} Mar 12 14:12:24.186684 master-0 kubenswrapper[7440]: I0312 14:12:24.186652 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435"} Mar 12 14:12:24.189061 master-0 kubenswrapper[7440]: I0312 14:12:24.189030 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="6060fd0146ead8129b93c5b31730ef60e2eaf7a165dbe7fde9719cb084457eda" exitCode=0 Mar 12 14:12:24.189119 master-0 kubenswrapper[7440]: I0312 14:12:24.189089 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"6060fd0146ead8129b93c5b31730ef60e2eaf7a165dbe7fde9719cb084457eda"} Mar 12 14:12:24.193158 master-0 kubenswrapper[7440]: I0312 14:12:24.193116 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8q2fv" 
event={"ID":"8e733069-752a-4140-83eb-8287f1bce1a7","Type":"ContainerStarted","Data":"ff62c021b6b2728ab194d385e1dcbbac9d1a1db7bb9e0282f3a425ca39b12bc0"} Mar 12 14:12:24.193218 master-0 kubenswrapper[7440]: I0312 14:12:24.193162 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8q2fv" event={"ID":"8e733069-752a-4140-83eb-8287f1bce1a7","Type":"ContainerStarted","Data":"210d19917e7415e5f1763dbc60d79ff661ed77ac9ff9582758b201449af2e08f"} Mar 12 14:12:24.195062 master-0 kubenswrapper[7440]: I0312 14:12:24.195038 7440 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="be2a07c0fd561c76349af0b4e32d3d5bd9b366ededeeef597a13a0ecfa9560a3" exitCode=0 Mar 12 14:12:24.195271 master-0 kubenswrapper[7440]: I0312 14:12:24.195101 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"be2a07c0fd561c76349af0b4e32d3d5bd9b366ededeeef597a13a0ecfa9560a3"} Mar 12 14:12:24.195976 master-0 kubenswrapper[7440]: I0312 14:12:24.195949 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3"} Mar 12 14:12:24.197668 master-0 kubenswrapper[7440]: I0312 14:12:24.197637 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"b4956129e01655acfb40ce60e009de2d9707827560481d924db590d2b05e8343"} Mar 12 14:12:24.327948 master-0 kubenswrapper[7440]: I0312 14:12:24.325484 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:24.331145 master-0 kubenswrapper[7440]: I0312 14:12:24.330253 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:24.859917 master-0 kubenswrapper[7440]: I0312 14:12:24.859083 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:24.986323 master-0 kubenswrapper[7440]: I0312 14:12:24.986195 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:24.986323 master-0 kubenswrapper[7440]: I0312 14:12:24.986252 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:24.986323 master-0 kubenswrapper[7440]: I0312 14:12:24.986287 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:24.986323 master-0 kubenswrapper[7440]: I0312 14:12:24.986320 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986400 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986431 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986453 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986484 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" 
(UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986506 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986537 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:24.986573 master-0 kubenswrapper[7440]: I0312 14:12:24.986560 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:24.986765 master-0 kubenswrapper[7440]: I0312 14:12:24.986583 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:24.986765 master-0 kubenswrapper[7440]: I0312 14:12:24.986607 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:24.986765 master-0 kubenswrapper[7440]: E0312 14:12:24.986723 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:24.986885 master-0 kubenswrapper[7440]: E0312 14:12:24.986811 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.986760065 +0000 UTC m=+9.322138624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:24.987013 master-0 kubenswrapper[7440]: E0312 14:12:24.986979 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:24.987055 master-0 kubenswrapper[7440]: E0312 14:12:24.987020 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987010781 +0000 UTC m=+9.322389340 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:24.987087 master-0 kubenswrapper[7440]: E0312 14:12:24.987061 7440 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 14:12:24.987087 master-0 kubenswrapper[7440]: E0312 14:12:24.987080 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987074563 +0000 UTC m=+9.322453112 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:24.987146 master-0 kubenswrapper[7440]: E0312 14:12:24.987119 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:24.987146 master-0 kubenswrapper[7440]: E0312 14:12:24.987136 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987130674 +0000 UTC m=+9.322509233 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:24.987204 master-0 kubenswrapper[7440]: E0312 14:12:24.987166 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:24.987204 master-0 kubenswrapper[7440]: E0312 14:12:24.987183 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987178246 +0000 UTC m=+9.322556805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:24.987260 master-0 kubenswrapper[7440]: E0312 14:12:24.987214 7440 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:24.987260 master-0 kubenswrapper[7440]: E0312 14:12:24.987230 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987225607 +0000 UTC m=+9.322604166 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:24.987260 master-0 kubenswrapper[7440]: E0312 14:12:24.987244 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987262 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987278 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987267348 +0000 UTC m=+9.322645907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987294 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987286738 +0000 UTC m=+9.322665297 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987300 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987316 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987310669 +0000 UTC m=+9.322689228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:24.987346 master-0 kubenswrapper[7440]: E0312 14:12:24.987340 7440 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987366 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.98735856 +0000 UTC m=+9.322737119 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987346 7440 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987392 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987384721 +0000 UTC m=+9.322763280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987372 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987398 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987416 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987409661 +0000 UTC m=+9.322788220 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:24.987508 master-0 kubenswrapper[7440]: E0312 14:12:24.987493 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.987481353 +0000 UTC m=+9.322859912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043036 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg"] Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: E0312 14:12:25.043202 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043216 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: E0312 14:12:25.043234 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043243 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="146495bf-0787-483f-a9fc-0e8925b89150" 
containerName="assisted-installer-controller" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043339 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7877fc-0d91-4dbe-b2ae-fa50012ced6c" containerName="prober" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043353 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller" Mar 12 14:12:25.045149 master-0 kubenswrapper[7440]: I0312 14:12:25.043650 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:12:25.058499 master-0 kubenswrapper[7440]: I0312 14:12:25.058455 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg"] Mar 12 14:12:25.089650 master-0 kubenswrapper[7440]: I0312 14:12:25.089571 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gnl\" (UniqueName: \"kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl\") pod \"csi-snapshot-controller-7577d6f48-z9hzg\" (UID: \"d56089bf-177c-492d-8964-73a45574e7ed\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:12:25.191025 master-0 kubenswrapper[7440]: I0312 14:12:25.190973 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gnl\" (UniqueName: \"kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl\") pod \"csi-snapshot-controller-7577d6f48-z9hzg\" (UID: \"d56089bf-177c-492d-8964-73a45574e7ed\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:12:25.203286 master-0 kubenswrapper[7440]: I0312 14:12:25.203243 7440 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Mar 12 14:12:25.215943 master-0 kubenswrapper[7440]: I0312 14:12:25.210710 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gnl\" (UniqueName: \"kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl\") pod \"csi-snapshot-controller-7577d6f48-z9hzg\" (UID: \"d56089bf-177c-492d-8964-73a45574e7ed\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:12:25.322937 master-0 kubenswrapper[7440]: I0312 14:12:25.322774 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp"] Mar 12 14:12:25.324206 master-0 kubenswrapper[7440]: I0312 14:12:25.323583 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:12:25.326665 master-0 kubenswrapper[7440]: I0312 14:12:25.326419 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 14:12:25.326665 master-0 kubenswrapper[7440]: I0312 14:12:25.326574 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 14:12:25.364185 master-0 kubenswrapper[7440]: I0312 14:12:25.360764 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp"] Mar 12 14:12:25.375386 master-0 kubenswrapper[7440]: I0312 14:12:25.374775 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:12:25.396976 master-0 kubenswrapper[7440]: I0312 14:12:25.396522 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdzwp\" (UniqueName: \"kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp\") pod \"migrator-57ccdf9b5-5zswp\" (UID: \"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:12:25.500125 master-0 kubenswrapper[7440]: I0312 14:12:25.499211 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdzwp\" (UniqueName: \"kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp\") pod \"migrator-57ccdf9b5-5zswp\" (UID: \"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:12:25.522145 master-0 kubenswrapper[7440]: I0312 14:12:25.521970 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdzwp\" (UniqueName: \"kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp\") pod \"migrator-57ccdf9b5-5zswp\" (UID: \"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:12:25.581593 master-0 kubenswrapper[7440]: I0312 14:12:25.581473 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg"] Mar 12 14:12:25.589571 master-0 kubenswrapper[7440]: W0312 14:12:25.589475 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56089bf_177c_492d_8964_73a45574e7ed.slice/crio-c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9 WatchSource:0}: Error finding 
container c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9: Status 404 returned error can't find the container with id c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9 Mar 12 14:12:25.645929 master-0 kubenswrapper[7440]: I0312 14:12:25.645836 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:12:25.814597 master-0 kubenswrapper[7440]: I0312 14:12:25.814557 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp"] Mar 12 14:12:25.819346 master-0 kubenswrapper[7440]: W0312 14:12:25.819307 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef01b7f_f7cb_4fd4_a75d_fe7a657d68d4.slice/crio-6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2 WatchSource:0}: Error finding container 6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2: Status 404 returned error can't find the container with id 6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2 Mar 12 14:12:26.169002 master-0 kubenswrapper[7440]: I0312 14:12:26.168963 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:12:26.179333 master-0 kubenswrapper[7440]: I0312 14:12:26.179291 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 14:12:26.207179 master-0 kubenswrapper[7440]: I0312 14:12:26.207136 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9"} Mar 12 14:12:26.209523 master-0 
kubenswrapper[7440]: I0312 14:12:26.209476 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2"} Mar 12 14:12:26.355934 master-0 kubenswrapper[7440]: I0312 14:12:26.354933 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q"] Mar 12 14:12:26.355934 master-0 kubenswrapper[7440]: I0312 14:12:26.355648 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.363116 master-0 kubenswrapper[7440]: I0312 14:12:26.362636 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:12:26.363116 master-0 kubenswrapper[7440]: I0312 14:12:26.362989 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:12:26.363116 master-0 kubenswrapper[7440]: I0312 14:12:26.363104 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:12:26.368541 master-0 kubenswrapper[7440]: I0312 14:12:26.364103 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:12:26.368541 master-0 kubenswrapper[7440]: I0312 14:12:26.364198 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:12:26.368541 master-0 kubenswrapper[7440]: I0312 14:12:26.364343 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:12:26.368541 master-0 kubenswrapper[7440]: I0312 14:12:26.368263 7440 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q"] Mar 12 14:12:26.409067 master-0 kubenswrapper[7440]: I0312 14:12:26.409013 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.409265 master-0 kubenswrapper[7440]: I0312 14:12:26.409188 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.409265 master-0 kubenswrapper[7440]: I0312 14:12:26.409240 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6h2r\" (UniqueName: \"kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.409482 master-0 kubenswrapper[7440]: I0312 14:12:26.409447 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.409617 master-0 kubenswrapper[7440]: I0312 14:12:26.409527 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.510581 master-0 kubenswrapper[7440]: I0312 14:12:26.510517 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.510775 master-0 kubenswrapper[7440]: E0312 14:12:26.510657 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 12 14:12:26.510814 master-0 kubenswrapper[7440]: I0312 14:12:26.510766 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.510814 master-0 kubenswrapper[7440]: E0312 14:12:26.510777 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:27.01075652 +0000 UTC m=+7.346135079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "openshift-global-ca" not found Mar 12 14:12:26.510880 master-0 kubenswrapper[7440]: I0312 14:12:26.510842 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.510934 master-0 kubenswrapper[7440]: E0312 14:12:26.510891 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 12 14:12:26.510970 master-0 kubenswrapper[7440]: I0312 14:12:26.510933 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.510970 master-0 kubenswrapper[7440]: E0312 14:12:26.510960 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:27.010945464 +0000 UTC m=+7.346324023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "config" not found Mar 12 14:12:26.511029 master-0 kubenswrapper[7440]: I0312 14:12:26.510977 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6h2r\" (UniqueName: \"kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.511029 master-0 kubenswrapper[7440]: E0312 14:12:26.511001 7440 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:26.511084 master-0 kubenswrapper[7440]: E0312 14:12:26.511045 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:27.011030176 +0000 UTC m=+7.346408835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : secret "serving-cert" not found Mar 12 14:12:26.511084 master-0 kubenswrapper[7440]: E0312 14:12:26.511056 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:26.511084 master-0 kubenswrapper[7440]: E0312 14:12:26.511081 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:27.011074007 +0000 UTC m=+7.346452566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "client-ca" not found Mar 12 14:12:26.528925 master-0 kubenswrapper[7440]: I0312 14:12:26.528861 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6h2r\" (UniqueName: \"kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:26.727764 master-0 kubenswrapper[7440]: I0312 14:12:26.727703 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:26.778998 master-0 kubenswrapper[7440]: I0312 14:12:26.775445 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:27.019919 master-0 kubenswrapper[7440]: 
I0312 14:12:27.019850 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:27.020416 master-0 kubenswrapper[7440]: E0312 14:12:27.020376 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:27.020473 master-0 kubenswrapper[7440]: E0312 14:12:27.020460 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.02044378 +0000 UTC m=+8.355822339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "client-ca" not found Mar 12 14:12:27.021086 master-0 kubenswrapper[7440]: E0312 14:12:27.021054 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 12 14:12:27.021142 master-0 kubenswrapper[7440]: E0312 14:12:27.021126 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.021108327 +0000 UTC m=+8.356486956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "config" not found Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: I0312 14:12:27.019956 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: I0312 14:12:27.021311 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: I0312 14:12:27.021406 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: E0312 14:12:27.021442 7440 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: E0312 14:12:27.021804 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 
nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.021779043 +0000 UTC m=+8.357157602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : secret "serving-cert" not found Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: E0312 14:12:27.021475 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 12 14:12:27.022415 master-0 kubenswrapper[7440]: E0312 14:12:27.022120 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.022106991 +0000 UTC m=+8.357485550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "openshift-global-ca" not found Mar 12 14:12:27.213044 master-0 kubenswrapper[7440]: I0312 14:12:27.212615 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:27.213044 master-0 kubenswrapper[7440]: I0312 14:12:27.212973 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:27.217573 master-0 kubenswrapper[7440]: I0312 14:12:27.216320 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q"] Mar 12 14:12:27.217573 master-0 kubenswrapper[7440]: E0312 14:12:27.216608 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], 
unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" podUID="d1b35d54-9d3f-40e2-90a6-813f5e51a208" Mar 12 14:12:27.221023 master-0 kubenswrapper[7440]: I0312 14:12:27.220980 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"] Mar 12 14:12:27.221781 master-0 kubenswrapper[7440]: I0312 14:12:27.221752 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.223171 master-0 kubenswrapper[7440]: I0312 14:12:27.223135 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:12:27.224263 master-0 kubenswrapper[7440]: I0312 14:12:27.223978 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:12:27.224263 master-0 kubenswrapper[7440]: I0312 14:12:27.224149 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:12:27.224442 master-0 kubenswrapper[7440]: I0312 14:12:27.224422 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:12:27.224604 master-0 kubenswrapper[7440]: I0312 14:12:27.224586 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:12:27.234804 master-0 kubenswrapper[7440]: I0312 14:12:27.234336 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"] Mar 12 14:12:27.324937 master-0 kubenswrapper[7440]: I0312 14:12:27.324855 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.325182 master-0 kubenswrapper[7440]: I0312 14:12:27.324971 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.325182 master-0 kubenswrapper[7440]: I0312 14:12:27.325008 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.325182 master-0 kubenswrapper[7440]: I0312 14:12:27.325045 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2hrz\" (UniqueName: \"kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.426593 master-0 kubenswrapper[7440]: I0312 14:12:27.426342 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod 
\"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.426697 master-0 kubenswrapper[7440]: I0312 14:12:27.426625 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.426697 master-0 kubenswrapper[7440]: I0312 14:12:27.426653 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.426791 master-0 kubenswrapper[7440]: I0312 14:12:27.426716 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2hrz\" (UniqueName: \"kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.427159 master-0 kubenswrapper[7440]: E0312 14:12:27.427122 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:27.427212 master-0 kubenswrapper[7440]: E0312 14:12:27.427202 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:27.927182659 +0000 UTC m=+8.262561218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found Mar 12 14:12:27.427505 master-0 kubenswrapper[7440]: E0312 14:12:27.427482 7440 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:27.427568 master-0 kubenswrapper[7440]: E0312 14:12:27.427520 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:27.927510377 +0000 UTC m=+8.262889006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : secret "serving-cert" not found Mar 12 14:12:27.429057 master-0 kubenswrapper[7440]: I0312 14:12:27.429029 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.446861 master-0 kubenswrapper[7440]: I0312 14:12:27.446804 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2hrz\" (UniqueName: \"kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: 
\"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.571528 master-0 kubenswrapper[7440]: I0312 14:12:27.570925 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-7lx8p"] Mar 12 14:12:27.571528 master-0 kubenswrapper[7440]: I0312 14:12:27.571337 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.572954 master-0 kubenswrapper[7440]: I0312 14:12:27.572877 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 14:12:27.573150 master-0 kubenswrapper[7440]: I0312 14:12:27.573131 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 14:12:27.573262 master-0 kubenswrapper[7440]: I0312 14:12:27.573246 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 14:12:27.578470 master-0 kubenswrapper[7440]: I0312 14:12:27.578428 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 14:12:27.583200 master-0 kubenswrapper[7440]: I0312 14:12:27.583143 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-7lx8p"] Mar 12 14:12:27.629621 master-0 kubenswrapper[7440]: I0312 14:12:27.629199 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.629621 master-0 kubenswrapper[7440]: I0312 14:12:27.629355 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.629621 master-0 kubenswrapper[7440]: I0312 14:12:27.629419 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9czc5\" (UniqueName: \"kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.730684 master-0 kubenswrapper[7440]: I0312 14:12:27.730594 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.730684 master-0 kubenswrapper[7440]: I0312 14:12:27.730681 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9czc5\" (UniqueName: \"kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.731140 master-0 kubenswrapper[7440]: I0312 14:12:27.730846 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 
12 14:12:27.733990 master-0 kubenswrapper[7440]: I0312 14:12:27.733873 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.743160 master-0 kubenswrapper[7440]: I0312 14:12:27.736493 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.750874 master-0 kubenswrapper[7440]: I0312 14:12:27.750813 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9czc5\" (UniqueName: \"kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:27.938051 master-0 kubenswrapper[7440]: I0312 14:12:27.937996 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.938254 master-0 kubenswrapper[7440]: E0312 14:12:27.938202 7440 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:27.938290 master-0 kubenswrapper[7440]: E0312 14:12:27.938276 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.938256834 +0000 UTC m=+9.273635393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : secret "serving-cert" not found Mar 12 14:12:27.938336 master-0 kubenswrapper[7440]: E0312 14:12:27.938294 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:27.938400 master-0 kubenswrapper[7440]: E0312 14:12:27.938380 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:28.938358886 +0000 UTC m=+9.273737445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found Mar 12 14:12:27.938450 master-0 kubenswrapper[7440]: I0312 14:12:27.938218 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:27.951856 master-0 kubenswrapper[7440]: I0312 14:12:27.951788 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:12:28.040920 master-0 kubenswrapper[7440]: I0312 14:12:28.040846 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.041109 master-0 kubenswrapper[7440]: E0312 14:12:28.040974 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:28.041109 master-0 kubenswrapper[7440]: E0312 14:12:28.041057 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:30.04103969 +0000 UTC m=+10.376418249 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : configmap "client-ca" not found Mar 12 14:12:28.041188 master-0 kubenswrapper[7440]: I0312 14:12:28.041120 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.041307 master-0 kubenswrapper[7440]: E0312 14:12:28.041280 7440 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:28.041358 master-0 kubenswrapper[7440]: E0312 14:12:28.041336 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert podName:d1b35d54-9d3f-40e2-90a6-813f5e51a208 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:30.041320507 +0000 UTC m=+10.376699106 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert") pod "controller-manager-6f7fd6c796-rcj4q" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208") : secret "serving-cert" not found Mar 12 14:12:28.041392 master-0 kubenswrapper[7440]: I0312 14:12:28.041379 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.041450 master-0 kubenswrapper[7440]: I0312 14:12:28.041433 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.042458 master-0 kubenswrapper[7440]: I0312 14:12:28.042430 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.042458 master-0 kubenswrapper[7440]: I0312 14:12:28.042440 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"controller-manager-6f7fd6c796-rcj4q\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.160817 master-0 
kubenswrapper[7440]: I0312 14:12:28.160767 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:12:28.219042 master-0 kubenswrapper[7440]: I0312 14:12:28.218976 7440 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="34b14db33a75935753eb07fc5c1da978369413ed001610be1a02068299e72c2a" exitCode=0 Mar 12 14:12:28.219717 master-0 kubenswrapper[7440]: I0312 14:12:28.219073 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"34b14db33a75935753eb07fc5c1da978369413ed001610be1a02068299e72c2a"} Mar 12 14:12:28.220475 master-0 kubenswrapper[7440]: I0312 14:12:28.220423 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.220737 master-0 kubenswrapper[7440]: I0312 14:12:28.220710 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"1abac70444f37ebc5d0a9feab691c5f95fb4db1e5c3e7cd1fedbd5970be25447"} Mar 12 14:12:28.221413 master-0 kubenswrapper[7440]: I0312 14:12:28.221115 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:28.233771 master-0 kubenswrapper[7440]: I0312 14:12:28.233608 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:28.344829 master-0 kubenswrapper[7440]: I0312 14:12:28.344632 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") pod \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " Mar 12 14:12:28.344829 master-0 kubenswrapper[7440]: I0312 14:12:28.344737 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") pod \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " Mar 12 14:12:28.344829 master-0 kubenswrapper[7440]: I0312 14:12:28.344763 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6h2r\" (UniqueName: \"kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r\") pod \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\" (UID: \"d1b35d54-9d3f-40e2-90a6-813f5e51a208\") " Mar 12 14:12:28.348557 master-0 kubenswrapper[7440]: I0312 14:12:28.345773 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config" (OuterVolumeSpecName: "config") pod "d1b35d54-9d3f-40e2-90a6-813f5e51a208" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:12:28.348557 master-0 kubenswrapper[7440]: I0312 14:12:28.346177 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d1b35d54-9d3f-40e2-90a6-813f5e51a208" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:12:28.348557 master-0 kubenswrapper[7440]: I0312 14:12:28.348477 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r" (OuterVolumeSpecName: "kube-api-access-v6h2r") pod "d1b35d54-9d3f-40e2-90a6-813f5e51a208" (UID: "d1b35d54-9d3f-40e2-90a6-813f5e51a208"). InnerVolumeSpecName "kube-api-access-v6h2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:12:28.434275 master-0 kubenswrapper[7440]: I0312 14:12:28.433357 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:28.434275 master-0 kubenswrapper[7440]: I0312 14:12:28.433499 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:28.434275 master-0 kubenswrapper[7440]: I0312 14:12:28.433513 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:28.446247 master-0 kubenswrapper[7440]: I0312 14:12:28.446090 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6h2r\" (UniqueName: \"kubernetes.io/projected/d1b35d54-9d3f-40e2-90a6-813f5e51a208-kube-api-access-v6h2r\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:28.446247 master-0 kubenswrapper[7440]: I0312 14:12:28.446121 7440 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:28.446247 master-0 kubenswrapper[7440]: I0312 14:12:28.446132 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:28.465310 master-0 kubenswrapper[7440]: I0312 
14:12:28.465234 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:28.648910 master-0 kubenswrapper[7440]: I0312 14:12:28.648585 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-7lx8p"] Mar 12 14:12:28.901116 master-0 kubenswrapper[7440]: I0312 14:12:28.900950 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:28.905592 master-0 kubenswrapper[7440]: I0312 14:12:28.905558 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: I0312 14:12:28.951606 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: I0312 14:12:28.951738 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: E0312 14:12:28.951937 7440 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: E0312 14:12:28.952001 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:30.951982254 +0000 UTC m=+11.287360813 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : secret "serving-cert" not found Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: E0312 14:12:28.952474 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:28.952774 master-0 kubenswrapper[7440]: E0312 14:12:28.952502 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:30.952494786 +0000 UTC m=+11.287873345 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053121 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053167 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053189 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053210 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053233 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053252 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053283 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053321 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053341 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053371 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053393 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053445 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: I0312 14:12:29.053465 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod 
\"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: E0312 14:12:29.053573 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 14:12:29.053734 master-0 kubenswrapper[7440]: E0312 14:12:29.053618 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.053604902 +0000 UTC m=+17.388983461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054258 7440 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054285 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls podName:4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054277969 +0000 UTC m=+17.389656528 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls") pod "ingress-operator-677db989d6-44hhf" (UID: "4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6") : secret "metrics-tls" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054320 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054338 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.05433257 +0000 UTC m=+17.389711129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054372 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054388 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054382651 +0000 UTC m=+17.389761210 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054419 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:29.054432 master-0 kubenswrapper[7440]: E0312 14:12:29.054435 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054430772 +0000 UTC m=+17.389809331 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "performance-addon-operator-webhook-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054468 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054485 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054479514 +0000 UTC m=+17.389858073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054515 7440 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054533 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls podName:8c6b9f13-4a3a-4920-a84b-f76516501f81 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054527235 +0000 UTC m=+17.389905794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls") pod "dns-operator-589895fbb7-q4wwv" (UID: "8c6b9f13-4a3a-4920-a84b-f76516501f81") : secret "metrics-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054773 7440 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054801 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls podName:879e9bf1-ce4a-40b7-a72c-fe4c61e96cea nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054792451 +0000 UTC m=+17.390171010 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-zghs6" (UID: "879e9bf1-ce4a-40b7-a72c-fe4c61e96cea") : secret "node-tuning-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054834 7440 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054856 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert podName:29ab0e68-ebc6-48a3-b234-e1794c4c5ad6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054849492 +0000 UTC m=+17.390228051 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert") pod "cluster-version-operator-745944c6b7-vs878" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6") : secret "cluster-version-operator-serving-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054912 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054931 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054925264 +0000 UTC m=+17.390303823 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054981 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.054998 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.054993636 +0000 UTC m=+17.390372185 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.055028 7440 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.055044 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls podName:a2435b91-86d6-415b-a978-34cc859e74f2 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.055039137 +0000 UTC m=+17.390417696 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-54cr9" (UID: "a2435b91-86d6-415b-a978-34cc859e74f2") : secret "image-registry-operator-tls" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.055075 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 14:12:29.055094 master-0 kubenswrapper[7440]: E0312 14:12:29.055091 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.055086178 +0000 UTC m=+17.390464737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found Mar 12 14:12:29.224884 master-0 kubenswrapper[7440]: I0312 14:12:29.224779 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"d53adb45a67056ee01b81331e65f41973a39210d835cc7c159b8fe9b81f06549"} Mar 12 14:12:29.226771 master-0 kubenswrapper[7440]: I0312 14:12:29.226052 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" 
event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be"} Mar 12 14:12:29.226771 master-0 kubenswrapper[7440]: I0312 14:12:29.226083 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"47bb0848ead40d3cf654dbab8841bba9aaf69454627f9510e73ce08c4830d731"} Mar 12 14:12:29.227955 master-0 kubenswrapper[7440]: I0312 14:12:29.227934 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q" Mar 12 14:12:29.228978 master-0 kubenswrapper[7440]: I0312 14:12:29.228942 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"34219c8db92022e83f302fe60298f8acc5a44b5e8ce995bbe93cbd8b92bb7d3e"} Mar 12 14:12:29.229060 master-0 kubenswrapper[7440]: I0312 14:12:29.228983 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"7b9fd861cdb850c770377b61b9c7cf051a5f9d4b0cf67257f63a4048e2364f02"} Mar 12 14:12:29.229106 master-0 kubenswrapper[7440]: I0312 14:12:29.229070 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:29.233839 master-0 kubenswrapper[7440]: I0312 14:12:29.233806 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:12:29.237715 master-0 kubenswrapper[7440]: I0312 14:12:29.237654 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podStartSLOduration=2.403096611 podStartE2EDuration="5.237636662s" podCreationTimestamp="2026-03-12 14:12:24 +0000 UTC" firstStartedPulling="2026-03-12 14:12:25.59109416 +0000 UTC m=+5.926472719" lastFinishedPulling="2026-03-12 14:12:28.425634211 +0000 UTC m=+8.761012770" observedRunningTime="2026-03-12 14:12:29.237481328 +0000 UTC m=+9.572859917" watchObservedRunningTime="2026-03-12 14:12:29.237636662 +0000 UTC m=+9.573015221" Mar 12 14:12:29.269577 master-0 kubenswrapper[7440]: I0312 14:12:29.269473 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" podStartSLOduration=1.659396751 podStartE2EDuration="4.269427519s" podCreationTimestamp="2026-03-12 14:12:25 +0000 UTC" firstStartedPulling="2026-03-12 14:12:25.821009137 +0000 UTC m=+6.156387696" lastFinishedPulling="2026-03-12 14:12:28.431039905 +0000 UTC m=+8.766418464" observedRunningTime="2026-03-12 14:12:29.268518998 +0000 UTC m=+9.603897557" watchObservedRunningTime="2026-03-12 14:12:29.269427519 +0000 UTC m=+9.604806078" Mar 12 14:12:29.309192 master-0 kubenswrapper[7440]: I0312 14:12:29.307238 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" podStartSLOduration=2.307221826 podStartE2EDuration="2.307221826s" podCreationTimestamp="2026-03-12 14:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:29.291830885 +0000 UTC m=+9.627209514" watchObservedRunningTime="2026-03-12 14:12:29.307221826 +0000 UTC m=+9.642600385" Mar 12 14:12:29.326457 master-0 kubenswrapper[7440]: I0312 14:12:29.326376 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q"] Mar 12 14:12:29.327240 master-0 
kubenswrapper[7440]: I0312 14:12:29.327201 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-rcj4q"] Mar 12 14:12:29.359378 master-0 kubenswrapper[7440]: I0312 14:12:29.359328 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1b35d54-9d3f-40e2-90a6-813f5e51a208-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:29.359378 master-0 kubenswrapper[7440]: I0312 14:12:29.359360 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b35d54-9d3f-40e2-90a6-813f5e51a208-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:29.813008 master-0 kubenswrapper[7440]: I0312 14:12:29.812962 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b35d54-9d3f-40e2-90a6-813f5e51a208" path="/var/lib/kubelet/pods/d1b35d54-9d3f-40e2-90a6-813f5e51a208/volumes" Mar 12 14:12:30.468301 master-0 kubenswrapper[7440]: I0312 14:12:30.468270 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:12:30.990305 master-0 kubenswrapper[7440]: I0312 14:12:30.990216 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"] Mar 12 14:12:30.990545 master-0 kubenswrapper[7440]: I0312 14:12:30.990478 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:30.990545 master-0 kubenswrapper[7440]: I0312 14:12:30.990535 7440 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:30.990685 master-0 kubenswrapper[7440]: E0312 14:12:30.990647 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:30.990749 master-0 kubenswrapper[7440]: E0312 14:12:30.990705 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:34.990689473 +0000 UTC m=+15.326068032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found Mar 12 14:12:30.990816 master-0 kubenswrapper[7440]: I0312 14:12:30.990779 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:30.991122 master-0 kubenswrapper[7440]: E0312 14:12:30.991089 7440 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 14:12:30.991176 master-0 kubenswrapper[7440]: E0312 14:12:30.991140 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:34.991130524 +0000 UTC m=+15.326509083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : secret "serving-cert" not found Mar 12 14:12:30.992516 master-0 kubenswrapper[7440]: I0312 14:12:30.992416 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:12:30.992516 master-0 kubenswrapper[7440]: I0312 14:12:30.992467 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:12:30.993788 master-0 kubenswrapper[7440]: I0312 14:12:30.992631 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:12:30.993788 master-0 kubenswrapper[7440]: I0312 14:12:30.992741 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:12:30.993788 master-0 kubenswrapper[7440]: I0312 14:12:30.992880 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:12:31.000112 master-0 kubenswrapper[7440]: I0312 14:12:31.000077 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"] Mar 12 14:12:31.002501 master-0 kubenswrapper[7440]: I0312 14:12:31.002436 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:12:31.091245 master-0 kubenswrapper[7440]: I0312 14:12:31.091180 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdg6\" (UniqueName: \"kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: 
\"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.091427 master-0 kubenswrapper[7440]: I0312 14:12:31.091301 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.091427 master-0 kubenswrapper[7440]: I0312 14:12:31.091384 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.091492 master-0 kubenswrapper[7440]: I0312 14:12:31.091449 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.091492 master-0 kubenswrapper[7440]: I0312 14:12:31.091477 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.192342 master-0 kubenswrapper[7440]: I0312 14:12:31.192309 7440 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.192424 master-0 kubenswrapper[7440]: I0312 14:12:31.192402 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.192493 master-0 kubenswrapper[7440]: I0312 14:12:31.192474 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.192548 master-0 kubenswrapper[7440]: I0312 14:12:31.192507 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:31.192610 master-0 kubenswrapper[7440]: I0312 14:12:31.192587 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdg6\" (UniqueName: \"kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: 
\"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.193862 master-0 kubenswrapper[7440]: I0312 14:12:31.193821 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.193946 master-0 kubenswrapper[7440]: E0312 14:12:31.193920 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:31.194002 master-0 kubenswrapper[7440]: E0312 14:12:31.193980 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:31.69396369 +0000 UTC m=+12.029342319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found
Mar 12 14:12:31.194550 master-0 kubenswrapper[7440]: E0312 14:12:31.194527 7440 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 14:12:31.194644 master-0 kubenswrapper[7440]: E0312 14:12:31.194597 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:31.694577635 +0000 UTC m=+12.029956224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : secret "serving-cert" not found
Mar 12 14:12:31.194715 master-0 kubenswrapper[7440]: I0312 14:12:31.194689 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.211683 master-0 kubenswrapper[7440]: I0312 14:12:31.211650 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdg6\" (UniqueName: \"kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.698780 master-0 kubenswrapper[7440]: I0312 14:12:31.698549 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.698780 master-0 kubenswrapper[7440]: I0312 14:12:31.698711 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:31.700051 master-0 kubenswrapper[7440]: E0312 14:12:31.698836 7440 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 14:12:31.700051 master-0 kubenswrapper[7440]: E0312 14:12:31.698881 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:32.698868241 +0000 UTC m=+13.034246800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : secret "serving-cert" not found
Mar 12 14:12:31.700051 master-0 kubenswrapper[7440]: E0312 14:12:31.699223 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:31.700051 master-0 kubenswrapper[7440]: E0312 14:12:31.699267 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:32.699247761 +0000 UTC m=+13.034626320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found
Mar 12 14:12:32.249745 master-0 kubenswrapper[7440]: I0312 14:12:32.249704 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"8b008968de598692f915807264f6e75fa5d1e6328d1b0539e40f5fbef6013982"}
Mar 12 14:12:32.713272 master-0 kubenswrapper[7440]: I0312 14:12:32.713096 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:32.713272 master-0 kubenswrapper[7440]: E0312 14:12:32.713250 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:32.714131 master-0 kubenswrapper[7440]: E0312 14:12:32.713500 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:34.713476833 +0000 UTC m=+15.048855402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found
Mar 12 14:12:32.714131 master-0 kubenswrapper[7440]: I0312 14:12:32.713546 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:32.718154 master-0 kubenswrapper[7440]: I0312 14:12:32.718102 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:34.732585 master-0 kubenswrapper[7440]: I0312 14:12:34.732538 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:34.733335 master-0 kubenswrapper[7440]: E0312 14:12:34.732662 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:34.733335 master-0 kubenswrapper[7440]: E0312 14:12:34.732756 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:38.73273375 +0000 UTC m=+19.068112399 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found
Mar 12 14:12:35.035783 master-0 kubenswrapper[7440]: I0312 14:12:35.035723 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"
Mar 12 14:12:35.036035 master-0 kubenswrapper[7440]: E0312 14:12:35.035939 7440 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 14:12:35.036035 master-0 kubenswrapper[7440]: E0312 14:12:35.036025 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:43.036004176 +0000 UTC m=+23.371382735 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : secret "serving-cert" not found
Mar 12 14:12:35.036133 master-0 kubenswrapper[7440]: I0312 14:12:35.036026 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"
Mar 12 14:12:35.036221 master-0 kubenswrapper[7440]: E0312 14:12:35.036190 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:35.036268 master-0 kubenswrapper[7440]: E0312 14:12:35.036233 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:43.036225081 +0000 UTC m=+23.371603640 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found
Mar 12 14:12:36.830838 master-0 kubenswrapper[7440]: I0312 14:12:36.830338 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-74b98ff8f9-wn926"]
Mar 12 14:12:36.832013 master-0 kubenswrapper[7440]: I0312 14:12:36.831628 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.833725 master-0 kubenswrapper[7440]: I0312 14:12:36.833689 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.834012 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.834019 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.834061 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.834272 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.834292 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.835310 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.835467 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 14:12:36.836200 master-0 kubenswrapper[7440]: I0312 14:12:36.835569 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 12 14:12:36.843955 master-0 kubenswrapper[7440]: I0312 14:12:36.842678 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-74b98ff8f9-wn926"]
Mar 12 14:12:36.854389 master-0 kubenswrapper[7440]: I0312 14:12:36.854285 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 14:12:36.958624 master-0 kubenswrapper[7440]: I0312 14:12:36.958570 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.958822 master-0 kubenswrapper[7440]: I0312 14:12:36.958640 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.958822 master-0 kubenswrapper[7440]: I0312 14:12:36.958801 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.958963 master-0 kubenswrapper[7440]: I0312 14:12:36.958878 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959100 master-0 kubenswrapper[7440]: I0312 14:12:36.959045 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959100 master-0 kubenswrapper[7440]: I0312 14:12:36.959086 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959204 master-0 kubenswrapper[7440]: I0312 14:12:36.959144 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959204 master-0 kubenswrapper[7440]: I0312 14:12:36.959197 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969ld\" (UniqueName: \"kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959282 master-0 kubenswrapper[7440]: I0312 14:12:36.959220 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959434 master-0 kubenswrapper[7440]: I0312 14:12:36.959349 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:36.959498 master-0 kubenswrapper[7440]: I0312 14:12:36.959437 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.060756 master-0 kubenswrapper[7440]: I0312 14:12:37.060624 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:12:37.060756 master-0 kubenswrapper[7440]: I0312 14:12:37.060690 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv"
Mar 12 14:12:37.060756 master-0 kubenswrapper[7440]: I0312 14:12:37.060711 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.060756 master-0 kubenswrapper[7440]: E0312 14:12:37.060734 7440 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:37.061048 master-0 kubenswrapper[7440]: E0312 14:12:37.060785 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls podName:42dbcb8f-e8c4-413e-977d-40aa6df226aa nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.060771469 +0000 UTC m=+33.396150028 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-6w5nv" (UID: "42dbcb8f-e8c4-413e-977d-40aa6df226aa") : secret "cluster-monitoring-operator-tls" not found
Mar 12 14:12:37.061511 master-0 kubenswrapper[7440]: I0312 14:12:37.061483 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.061760 master-0 kubenswrapper[7440]: I0312 14:12:37.061706 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:37.061822 master-0 kubenswrapper[7440]: I0312 14:12:37.061770 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.061855 master-0 kubenswrapper[7440]: I0312 14:12:37.061825 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062008 master-0 kubenswrapper[7440]: I0312 14:12:37.061977 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"
Mar 12 14:12:37.062235 master-0 kubenswrapper[7440]: I0312 14:12:37.062196 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062351 master-0 kubenswrapper[7440]: I0312 14:12:37.062321 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g"
Mar 12 14:12:37.062414 master-0 kubenswrapper[7440]: E0312 14:12:37.062348 7440 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 14:12:37.062414 master-0 kubenswrapper[7440]: I0312 14:12:37.062355 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062414 master-0 kubenswrapper[7440]: I0312 14:12:37.062378 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062414 master-0 kubenswrapper[7440]: E0312 14:12:37.062389 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs podName:7fdce71e-8085-4316-be40-e535530c2ca4 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.062378589 +0000 UTC m=+33.397757148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs") pod "network-metrics-daemon-n9v7g" (UID: "7fdce71e-8085-4316-be40-e535530c2ca4") : secret "metrics-daemon-secret" not found
Mar 12 14:12:37.062414 master-0 kubenswrapper[7440]: I0312 14:12:37.062410 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: I0312 14:12:37.062438 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: E0312 14:12:37.062450 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: I0312 14:12:37.062471 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: E0312 14:12:37.062495 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.562479901 +0000 UTC m=+17.897858550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "etcd-serving-ca" not found
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: I0312 14:12:37.062474 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: I0312 14:12:37.062526 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb"
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: E0312 14:12:37.062542 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 14:12:37.062565 master-0 kubenswrapper[7440]: I0312 14:12:37.062558 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: E0312 14:12:37.062587 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert podName:85459175-2c9c-425d-bdfb-0a79c92ed110 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.062572564 +0000 UTC m=+33.397951123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-dvv78" (UID: "85459175-2c9c-425d-bdfb-0a79c92ed110") : secret "package-server-manager-serving-cert" not found
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: E0312 14:12:37.062606 7440 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062608 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: E0312 14:12:37.062626 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.562620925 +0000 UTC m=+17.897999484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "serving-cert" not found
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: E0312 14:12:37.062660 7440 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: E0312 14:12:37.062677 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs podName:7023af8b-bfcc-4253-85cd-d891dff1c86e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.062672336 +0000 UTC m=+33.398050895 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs") pod "multus-admission-controller-8d675b596-sm9nb" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e") : secret "multus-admission-controller-secret" not found
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062698 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062716 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062737 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969ld\" (UniqueName: \"kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062754 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062773 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062793 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"
Mar 12 14:12:37.062806 master-0 kubenswrapper[7440]: I0312 14:12:37.062810 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: I0312 14:12:37.062830 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: E0312 14:12:37.062962 7440 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: E0312 14:12:37.063005 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics podName:1bc0d552-01c7-4212-a551-d16419f2dc80 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.062993285 +0000 UTC m=+33.398371844 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-qzdff" (UID: "1bc0d552-01c7-4212-a551-d16419f2dc80") : secret "marketplace-operator-metrics" not found
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: I0312 14:12:37.063126 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926"
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: E0312 14:12:37.063136 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 14:12:37.063311 master-0 kubenswrapper[7440]: E0312 14:12:37.063193 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert podName:07a6a1d6-fecf-4847-b7c1-160d5d7320fb nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.063181539 +0000 UTC m=+33.398560188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert") pod "olm-operator-d64cfc9db-f48hv" (UID: "07a6a1d6-fecf-4847-b7c1-160d5d7320fb") : secret "olm-operator-serving-cert" not found
Mar 12 14:12:37.063500 master-0 kubenswrapper[7440]: E0312 14:12:37.063401 7440 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 12 14:12:37.063500 master-0 kubenswrapper[7440]: E0312 14:12:37.063436 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.563425335 +0000 UTC m=+17.898803994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "etcd-client" not found
Mar 12 14:12:37.063573 master-0 kubenswrapper[7440]: E0312 14:12:37.063496 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 14:12:37.063573 master-0 kubenswrapper[7440]: E0312 14:12:37.063547 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:37.563532958 +0000 UTC m=+17.898911607 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "audit-0" not found Mar 12 14:12:37.063573 master-0 kubenswrapper[7440]: E0312 14:12:37.063547 7440 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 14:12:37.063677 master-0 kubenswrapper[7440]: E0312 14:12:37.063582 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert podName:272b53c4-134c-404d-9a27-c7371415b1f7 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.063574219 +0000 UTC m=+33.398952898 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert") pod "catalog-operator-7d9c49f57b-whr79" (UID: "272b53c4-134c-404d-9a27-c7371415b1f7") : secret "catalog-operator-serving-cert" not found Mar 12 14:12:37.064306 master-0 kubenswrapper[7440]: I0312 14:12:37.064217 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.064749 master-0 kubenswrapper[7440]: I0312 14:12:37.064711 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:37.066048 master-0 kubenswrapper[7440]: I0312 14:12:37.064957 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.066048 master-0 kubenswrapper[7440]: I0312 14:12:37.065089 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:37.066048 master-0 kubenswrapper[7440]: I0312 14:12:37.065362 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:37.066048 master-0 kubenswrapper[7440]: I0312 14:12:37.065564 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"cluster-version-operator-745944c6b7-vs878\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:37.074493 master-0 kubenswrapper[7440]: I0312 14:12:37.074451 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:37.074493 master-0 kubenswrapper[7440]: I0312 14:12:37.074494 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:37.078531 master-0 kubenswrapper[7440]: I0312 14:12:37.078488 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969ld\" (UniqueName: \"kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.268219 master-0 kubenswrapper[7440]: I0312 14:12:37.268152 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"7a2823c237ff92e61d73f497473360f5c4e92a6a6cb9f9ef1530c99732f22a88"} Mar 12 14:12:37.270092 master-0 kubenswrapper[7440]: I0312 14:12:37.270057 7440 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="8b008968de598692f915807264f6e75fa5d1e6328d1b0539e40f5fbef6013982" exitCode=0 Mar 12 14:12:37.270092 master-0 kubenswrapper[7440]: I0312 14:12:37.270084 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" 
event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"8b008968de598692f915807264f6e75fa5d1e6328d1b0539e40f5fbef6013982"} Mar 12 14:12:37.270333 master-0 kubenswrapper[7440]: I0312 14:12:37.270306 7440 scope.go:117] "RemoveContainer" containerID="8b008968de598692f915807264f6e75fa5d1e6328d1b0539e40f5fbef6013982" Mar 12 14:12:37.357297 master-0 kubenswrapper[7440]: I0312 14:12:37.357225 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:12:37.361648 master-0 kubenswrapper[7440]: I0312 14:12:37.361626 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:12:37.362144 master-0 kubenswrapper[7440]: I0312 14:12:37.362056 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:12:37.366665 master-0 kubenswrapper[7440]: I0312 14:12:37.366047 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:12:37.366665 master-0 kubenswrapper[7440]: I0312 14:12:37.366060 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:12:37.576260 master-0 kubenswrapper[7440]: I0312 14:12:37.575976 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.576336 master-0 kubenswrapper[7440]: I0312 14:12:37.576310 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.576369 master-0 kubenswrapper[7440]: I0312 14:12:37.576337 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.576369 master-0 kubenswrapper[7440]: I0312 14:12:37.576361 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:37.576507 master-0 kubenswrapper[7440]: E0312 14:12:37.576218 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Mar 12 14:12:37.576551 master-0 kubenswrapper[7440]: E0312 14:12:37.576537 7440 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:38.576523009 +0000 UTC m=+18.911901568 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "etcd-serving-ca" not found Mar 12 14:12:37.576617 master-0 kubenswrapper[7440]: E0312 14:12:37.576604 7440 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 12 14:12:37.576651 master-0 kubenswrapper[7440]: E0312 14:12:37.576628 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:38.576621982 +0000 UTC m=+18.912000541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "serving-cert" not found Mar 12 14:12:37.576692 master-0 kubenswrapper[7440]: E0312 14:12:37.576483 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 14:12:37.576692 master-0 kubenswrapper[7440]: E0312 14:12:37.576662 7440 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 12 14:12:37.576692 master-0 kubenswrapper[7440]: E0312 14:12:37.576683 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:38.576678233 +0000 UTC m=+18.912056792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "etcd-client" not found Mar 12 14:12:37.576777 master-0 kubenswrapper[7440]: E0312 14:12:37.576712 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:38.576690163 +0000 UTC m=+18.912068722 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "audit-0" not found Mar 12 14:12:37.639459 master-0 kubenswrapper[7440]: I0312 14:12:37.639397 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9"] Mar 12 14:12:37.675664 master-0 kubenswrapper[7440]: I0312 14:12:37.675622 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"] Mar 12 14:12:37.678860 master-0 kubenswrapper[7440]: W0312 14:12:37.678817 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879e9bf1_ce4a_40b7_a72c_fe4c61e96cea.slice/crio-cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4 WatchSource:0}: Error finding container cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4: Status 404 returned error can't find the container with id cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4 Mar 12 14:12:37.688763 master-0 kubenswrapper[7440]: I0312 14:12:37.688707 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-q4wwv"] Mar 12 14:12:37.697524 master-0 kubenswrapper[7440]: W0312 14:12:37.697492 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c6b9f13_4a3a_4920_a84b_f76516501f81.slice/crio-04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701 WatchSource:0}: Error finding container 04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701: Status 404 returned error can't find the container with id 04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701 Mar 12 14:12:37.703109 
master-0 kubenswrapper[7440]: I0312 14:12:37.703087 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-44hhf"] Mar 12 14:12:37.710604 master-0 kubenswrapper[7440]: W0312 14:12:37.710561 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bbd4f6c_53c0_45dc_ac7c_940a3a5a08f6.slice/crio-031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb WatchSource:0}: Error finding container 031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb: Status 404 returned error can't find the container with id 031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb Mar 12 14:12:38.280921 master-0 kubenswrapper[7440]: I0312 14:12:38.280587 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"d09193ab64fa4ad5898ed40452f50720dec8c982d5f7eb0df7950d928c3d3534"} Mar 12 14:12:38.282848 master-0 kubenswrapper[7440]: I0312 14:12:38.281445 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701"} Mar 12 14:12:38.284753 master-0 kubenswrapper[7440]: I0312 14:12:38.284656 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"d27cef2ffd951ac8b7af825674c33be11e2853a2bd3265c01b885bcdafe8ff3f"} Mar 12 14:12:38.286149 master-0 kubenswrapper[7440]: I0312 14:12:38.286127 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" 
event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb"} Mar 12 14:12:38.287026 master-0 kubenswrapper[7440]: I0312 14:12:38.287005 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" event={"ID":"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6","Type":"ContainerStarted","Data":"c4b5088c802a368b7c7d0efdb50871f27fcbf22b2f22473b852cef3d38ae1618"} Mar 12 14:12:38.287925 master-0 kubenswrapper[7440]: I0312 14:12:38.287853 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerStarted","Data":"cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4"} Mar 12 14:12:38.289124 master-0 kubenswrapper[7440]: I0312 14:12:38.289098 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerStarted","Data":"a6ab4911ef54a5ef7fd92d9752905d7377429179c56c4e77bafea0e6505d40e2"} Mar 12 14:12:38.591412 master-0 kubenswrapper[7440]: I0312 14:12:38.591317 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:38.591412 master-0 kubenswrapper[7440]: I0312 14:12:38.591398 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " 
pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:38.591412 master-0 kubenswrapper[7440]: I0312 14:12:38.591421 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:38.591629 master-0 kubenswrapper[7440]: E0312 14:12:38.591491 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Mar 12 14:12:38.591629 master-0 kubenswrapper[7440]: I0312 14:12:38.591538 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:38.591629 master-0 kubenswrapper[7440]: E0312 14:12:38.591563 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:40.591546932 +0000 UTC m=+20.926925491 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "etcd-serving-ca" not found Mar 12 14:12:38.591629 master-0 kubenswrapper[7440]: E0312 14:12:38.591607 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 14:12:38.591749 master-0 kubenswrapper[7440]: E0312 14:12:38.591651 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:40.591637864 +0000 UTC m=+20.927016423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "audit-0" not found Mar 12 14:12:38.591749 master-0 kubenswrapper[7440]: E0312 14:12:38.591741 7440 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 12 14:12:38.591837 master-0 kubenswrapper[7440]: E0312 14:12:38.591761 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:40.591755147 +0000 UTC m=+20.927133706 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "etcd-client" not found Mar 12 14:12:38.591837 master-0 kubenswrapper[7440]: E0312 14:12:38.591796 7440 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 12 14:12:38.591837 master-0 kubenswrapper[7440]: E0312 14:12:38.591815 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:40.591808608 +0000 UTC m=+20.927187167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "serving-cert" not found Mar 12 14:12:38.793530 master-0 kubenswrapper[7440]: I0312 14:12:38.793488 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:38.793722 master-0 kubenswrapper[7440]: E0312 14:12:38.793626 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:38.793722 master-0 kubenswrapper[7440]: E0312 14:12:38.793679 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:46.793666111 +0000 UTC m=+27.129044670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found Mar 12 14:12:39.295804 master-0 kubenswrapper[7440]: I0312 14:12:39.295725 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vb4v5" event={"ID":"b9d51570-06dd-4e2f-9c19-07fb694279ae","Type":"ContainerStarted","Data":"d103b6dd3025cd6675ab416a258908fa08fddcab4596d474c90fa3b8ce404326"} Mar 12 14:12:40.620364 master-0 kubenswrapper[7440]: I0312 14:12:40.620040 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620191 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: configmap "etcd-serving-ca" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: I0312 14:12:40.620425 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620459 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:12:44.620428826 +0000 UTC m=+24.955807385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "etcd-serving-ca" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: I0312 14:12:40.620488 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: I0312 14:12:40.620538 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620559 7440 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620616 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:44.62059827 +0000 UTC m=+24.955976829 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "serving-cert" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620708 7440 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620790 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:44.620770836 +0000 UTC m=+24.956149395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : secret "etcd-client" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620825 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 14:12:40.622008 master-0 kubenswrapper[7440]: E0312 14:12:40.620870 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:44.620858188 +0000 UTC m=+24.956236807 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : configmap "audit-0" not found Mar 12 14:12:41.423424 master-0 kubenswrapper[7440]: I0312 14:12:41.423371 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-74b98ff8f9-wn926"] Mar 12 14:12:41.423703 master-0 kubenswrapper[7440]: E0312 14:12:41.423667 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit etcd-client etcd-serving-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" podUID="fda26e79-226b-45ff-8e7e-2396bbb495c0" Mar 12 14:12:42.304307 master-0 kubenswrapper[7440]: I0312 14:12:42.304258 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:42.312452 master-0 kubenswrapper[7440]: I0312 14:12:42.312421 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:42.343826 master-0 kubenswrapper[7440]: I0312 14:12:42.343767 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.343826 master-0 kubenswrapper[7440]: I0312 14:12:42.343811 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344033 master-0 kubenswrapper[7440]: I0312 14:12:42.343857 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969ld\" (UniqueName: \"kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344033 master-0 kubenswrapper[7440]: I0312 14:12:42.343875 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344033 master-0 kubenswrapper[7440]: I0312 14:12:42.343957 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344033 master-0 kubenswrapper[7440]: I0312 14:12:42.343981 7440 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344033 master-0 kubenswrapper[7440]: I0312 14:12:42.344024 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle\") pod \"fda26e79-226b-45ff-8e7e-2396bbb495c0\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " Mar 12 14:12:42.344467 master-0 kubenswrapper[7440]: I0312 14:12:42.344431 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config" (OuterVolumeSpecName: "config") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:12:42.344509 master-0 kubenswrapper[7440]: I0312 14:12:42.344485 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:12:42.344539 master-0 kubenswrapper[7440]: I0312 14:12:42.344519 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:12:42.344572 master-0 kubenswrapper[7440]: I0312 14:12:42.344541 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:12:42.345211 master-0 kubenswrapper[7440]: I0312 14:12:42.345167 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:12:42.349441 master-0 kubenswrapper[7440]: I0312 14:12:42.349400 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld" (OuterVolumeSpecName: "kube-api-access-969ld") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "kube-api-access-969ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:12:42.354468 master-0 kubenswrapper[7440]: I0312 14:12:42.354191 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "fda26e79-226b-45ff-8e7e-2396bbb495c0" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:12:42.445292 master-0 kubenswrapper[7440]: I0312 14:12:42.445217 7440 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445292 master-0 kubenswrapper[7440]: I0312 14:12:42.445264 7440 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445292 master-0 kubenswrapper[7440]: I0312 14:12:42.445277 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445292 master-0 kubenswrapper[7440]: I0312 14:12:42.445289 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969ld\" (UniqueName: \"kubernetes.io/projected/fda26e79-226b-45ff-8e7e-2396bbb495c0-kube-api-access-969ld\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445292 master-0 kubenswrapper[7440]: I0312 14:12:42.445300 7440 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445616 master-0 kubenswrapper[7440]: I0312 14:12:42.445310 7440 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:42.445616 master-0 kubenswrapper[7440]: I0312 14:12:42.445321 7440 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:43.052302 master-0 kubenswrapper[7440]: I0312 14:12:43.052242 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:43.052994 master-0 kubenswrapper[7440]: I0312 14:12:43.052956 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:43.053090 master-0 kubenswrapper[7440]: E0312 14:12:43.053052 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:43.053166 master-0 kubenswrapper[7440]: E0312 14:12:43.053143 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca podName:951723ec-2626-45a8-86d4-ee5c5cfabf3b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:59.053097919 +0000 UTC m=+39.388476478 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca") pod "route-controller-manager-5549bf695c-78xdj" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b") : configmap "client-ca" not found Mar 12 14:12:43.055981 master-0 kubenswrapper[7440]: I0312 14:12:43.055952 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"route-controller-manager-5549bf695c-78xdj\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") " pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:43.312804 master-0 kubenswrapper[7440]: I0312 14:12:43.309041 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:44.672291 master-0 kubenswrapper[7440]: I0312 14:12:44.672212 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672313 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: object "openshift-apiserver"/"audit-0" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: I0312 14:12:44.672345 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672378 7440 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:52.672360883 +0000 UTC m=+33.007739442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : object "openshift-apiserver"/"audit-0" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672407 7440 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672441 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:52.672429205 +0000 UTC m=+33.007807764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: I0312 14:12:44.672458 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: I0312 14:12:44.672489 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") pod \"apiserver-74b98ff8f9-wn926\" (UID: \"fda26e79-226b-45ff-8e7e-2396bbb495c0\") " pod="openshift-apiserver/apiserver-74b98ff8f9-wn926" Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672571 7440 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672593 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:52.672586869 +0000 UTC m=+33.007965428 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : object "openshift-apiserver"/"etcd-client" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672663 7440 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Mar 12 14:12:44.673035 master-0 kubenswrapper[7440]: E0312 14:12:44.672749 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert podName:fda26e79-226b-45ff-8e7e-2396bbb495c0 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:52.672727183 +0000 UTC m=+33.008105782 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert") pod "apiserver-74b98ff8f9-wn926" (UID: "fda26e79-226b-45ff-8e7e-2396bbb495c0") : object "openshift-apiserver"/"serving-cert" not registered Mar 12 14:12:45.884925 master-0 kubenswrapper[7440]: I0312 14:12:45.884835 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"] Mar 12 14:12:45.885888 master-0 kubenswrapper[7440]: I0312 14:12:45.885692 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.908691 master-0 kubenswrapper[7440]: I0312 14:12:45.908466 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 14:12:45.909099 master-0 kubenswrapper[7440]: I0312 14:12:45.908759 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 14:12:45.909099 master-0 kubenswrapper[7440]: I0312 14:12:45.908827 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 14:12:45.909099 master-0 kubenswrapper[7440]: I0312 14:12:45.908920 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 14:12:45.909099 master-0 kubenswrapper[7440]: I0312 14:12:45.908937 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 14:12:45.909099 master-0 kubenswrapper[7440]: I0312 14:12:45.908975 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 14:12:45.913941 master-0 kubenswrapper[7440]: I0312 14:12:45.909057 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 14:12:45.927609 master-0 kubenswrapper[7440]: I0312 14:12:45.923833 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 14:12:45.927609 master-0 kubenswrapper[7440]: I0312 14:12:45.926405 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 14:12:45.935584 master-0 kubenswrapper[7440]: I0312 14:12:45.935525 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 14:12:45.986613 master-0 kubenswrapper[7440]: I0312 14:12:45.986532 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.986613 master-0 kubenswrapper[7440]: I0312 14:12:45.986607 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.986853 master-0 kubenswrapper[7440]: I0312 14:12:45.986685 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.986853 master-0 kubenswrapper[7440]: I0312 14:12:45.986757 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x5bj\" (UniqueName: \"kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.986853 master-0 kubenswrapper[7440]: I0312 14:12:45.986838 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") 
" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.986976 master-0 kubenswrapper[7440]: I0312 14:12:45.986869 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.987059 master-0 kubenswrapper[7440]: I0312 14:12:45.986959 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.987103 master-0 kubenswrapper[7440]: I0312 14:12:45.987068 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.987103 master-0 kubenswrapper[7440]: I0312 14:12:45.987095 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.987181 master-0 kubenswrapper[7440]: I0312 14:12:45.987117 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:45.987181 master-0 kubenswrapper[7440]: I0312 14:12:45.987140 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088099 master-0 kubenswrapper[7440]: I0312 14:12:46.088004 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088099 master-0 kubenswrapper[7440]: I0312 14:12:46.088053 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088099 master-0 kubenswrapper[7440]: I0312 14:12:46.088083 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088386 master-0 kubenswrapper[7440]: I0312 14:12:46.088168 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088386 master-0 kubenswrapper[7440]: I0312 14:12:46.088345 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088450 master-0 kubenswrapper[7440]: I0312 14:12:46.088393 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088488 master-0 kubenswrapper[7440]: I0312 14:12:46.088444 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088488 master-0 kubenswrapper[7440]: I0312 14:12:46.088454 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088488 master-0 kubenswrapper[7440]: I0312 14:12:46.088469 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9x5bj\" (UniqueName: \"kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088600 master-0 kubenswrapper[7440]: I0312 14:12:46.088571 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088645 master-0 kubenswrapper[7440]: I0312 14:12:46.088619 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088699 master-0 kubenswrapper[7440]: I0312 14:12:46.088670 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.088737 master-0 kubenswrapper[7440]: I0312 14:12:46.088711 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.089034 master-0 kubenswrapper[7440]: E0312 14:12:46.089001 7440 secret.go:189] Couldn't get secret 
openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 12 14:12:46.089196 master-0 kubenswrapper[7440]: E0312 14:12:46.089183 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client podName:1763269d-d7d2-44ae-a7aa-74eca578d04b nodeName:}" failed. No retries permitted until 2026-03-12 14:12:46.589160621 +0000 UTC m=+26.924539180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client") pod "apiserver-65c58d4d64-b8nwz" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b") : secret "etcd-client" not found Mar 12 14:12:46.089261 master-0 kubenswrapper[7440]: I0312 14:12:46.089070 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.089330 master-0 kubenswrapper[7440]: I0312 14:12:46.089026 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.089516 master-0 kubenswrapper[7440]: I0312 14:12:46.089484 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.089641 master-0 kubenswrapper[7440]: I0312 14:12:46.089613 7440 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.090087 master-0 kubenswrapper[7440]: I0312 14:12:46.090053 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.093519 master-0 kubenswrapper[7440]: I0312 14:12:46.093453 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.095239 master-0 kubenswrapper[7440]: I0312 14:12:46.095205 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.163938 master-0 kubenswrapper[7440]: I0312 14:12:46.163767 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:46.163938 master-0 kubenswrapper[7440]: I0312 14:12:46.163941 7440 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:12:46.180096 master-0 kubenswrapper[7440]: I0312 14:12:46.180053 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:12:46.502092 master-0 kubenswrapper[7440]: I0312 14:12:46.501160 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-74b98ff8f9-wn926"] Mar 12 14:12:46.503097 master-0 kubenswrapper[7440]: I0312 14:12:46.503065 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"] Mar 12 14:12:46.595267 master-0 kubenswrapper[7440]: I0312 14:12:46.594980 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.598317 master-0 kubenswrapper[7440]: I0312 14:12:46.598197 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:46.800987 master-0 kubenswrapper[7440]: I0312 14:12:46.800782 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") pod \"controller-manager-86dc8d5fd9-pzj8l\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") " pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:46.801212 master-0 kubenswrapper[7440]: E0312 14:12:46.801070 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:46.801212 master-0 kubenswrapper[7440]: E0312 14:12:46.801149 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca podName:4c62dd80-5d38-4385-81c2-fead2afdb3c6 nodeName:}" failed. No retries permitted until 2026-03-12 14:13:02.801122294 +0000 UTC m=+43.136500883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca") pod "controller-manager-86dc8d5fd9-pzj8l" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6") : configmap "client-ca" not found Mar 12 14:12:46.811862 master-0 kubenswrapper[7440]: I0312 14:12:46.811792 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-74b98ff8f9-wn926"] Mar 12 14:12:46.902268 master-0 kubenswrapper[7440]: I0312 14:12:46.902222 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:46.903060 master-0 kubenswrapper[7440]: I0312 14:12:46.903019 7440 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-audit\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:46.903060 master-0 kubenswrapper[7440]: I0312 14:12:46.903050 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fda26e79-226b-45ff-8e7e-2396bbb495c0-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:46.903060 master-0 kubenswrapper[7440]: I0312 14:12:46.903061 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda26e79-226b-45ff-8e7e-2396bbb495c0-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:47.254593 master-0 kubenswrapper[7440]: I0312 14:12:47.254556 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x5bj\" 
(UniqueName: \"kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj\") pod \"apiserver-65c58d4d64-b8nwz\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:47.425203 master-0 kubenswrapper[7440]: I0312 14:12:47.425150 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:47.805559 master-0 kubenswrapper[7440]: I0312 14:12:47.803583 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"] Mar 12 14:12:47.816203 master-0 kubenswrapper[7440]: I0312 14:12:47.815926 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda26e79-226b-45ff-8e7e-2396bbb495c0" path="/var/lib/kubelet/pods/fda26e79-226b-45ff-8e7e-2396bbb495c0/volumes" Mar 12 14:12:47.891975 master-0 kubenswrapper[7440]: I0312 14:12:47.885966 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"] Mar 12 14:12:47.951800 master-0 kubenswrapper[7440]: I0312 14:12:47.951746 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"] Mar 12 14:12:47.952764 master-0 kubenswrapper[7440]: E0312 14:12:47.952055 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" podUID="4c62dd80-5d38-4385-81c2-fead2afdb3c6" Mar 12 14:12:47.997927 master-0 kubenswrapper[7440]: I0312 14:12:47.997847 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"] Mar 12 14:12:47.998161 master-0 kubenswrapper[7440]: E0312 14:12:47.998112 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" podUID="951723ec-2626-45a8-86d4-ee5c5cfabf3b" Mar 12 14:12:48.139031 master-0 kubenswrapper[7440]: I0312 14:12:48.138992 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"] Mar 12 14:12:48.139655 master-0 kubenswrapper[7440]: I0312 14:12:48.139632 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.141225 master-0 kubenswrapper[7440]: W0312 14:12:48.141177 7440 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 12 14:12:48.141324 master-0 kubenswrapper[7440]: E0312 14:12:48.141237 7440 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 12 14:12:48.141324 master-0 kubenswrapper[7440]: W0312 14:12:48.141279 7440 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: secrets "catalogserver-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 12 14:12:48.141324 master-0 
kubenswrapper[7440]: W0312 14:12:48.141285 7440 reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "catalogd-trusted-ca-bundle" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 12 14:12:48.141324 master-0 kubenswrapper[7440]: W0312 14:12:48.141323 7440 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 12 14:12:48.141444 master-0 kubenswrapper[7440]: E0312 14:12:48.141324 7440 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"catalogd-trusted-ca-bundle\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 12 14:12:48.141444 master-0 kubenswrapper[7440]: E0312 14:12:48.141338 7440 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 12 14:12:48.141444 master-0 kubenswrapper[7440]: E0312 14:12:48.141292 7440 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed 
to list *v1.Secret: secrets \"catalogserver-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 12 14:12:48.164713 master-0 kubenswrapper[7440]: I0312 14:12:48.164662 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"] Mar 12 14:12:48.232599 master-0 kubenswrapper[7440]: I0312 14:12:48.232548 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.232871 master-0 kubenswrapper[7440]: I0312 14:12:48.232631 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.232871 master-0 kubenswrapper[7440]: I0312 14:12:48.232673 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.232871 master-0 kubenswrapper[7440]: I0312 14:12:48.232687 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.232871 master-0 kubenswrapper[7440]: I0312 14:12:48.232720 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.232871 master-0 kubenswrapper[7440]: I0312 14:12:48.232760 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.242372 master-0 kubenswrapper[7440]: I0312 14:12:48.242325 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"] Mar 12 14:12:48.243003 master-0 kubenswrapper[7440]: I0312 14:12:48.242983 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.245682 master-0 kubenswrapper[7440]: I0312 14:12:48.245644 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-btfvk"] Mar 12 14:12:48.246047 master-0 kubenswrapper[7440]: I0312 14:12:48.246019 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.252572 master-0 kubenswrapper[7440]: I0312 14:12:48.252530 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 14:12:48.252765 master-0 kubenswrapper[7440]: I0312 14:12:48.252719 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 14:12:48.253879 master-0 kubenswrapper[7440]: I0312 14:12:48.253850 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 14:12:48.262613 master-0 kubenswrapper[7440]: I0312 14:12:48.262581 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"] Mar 12 14:12:48.325614 master-0 kubenswrapper[7440]: I0312 14:12:48.325512 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerStarted","Data":"875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152"} Mar 12 14:12:48.327386 master-0 kubenswrapper[7440]: I0312 14:12:48.327350 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" 
event={"ID":"1763269d-d7d2-44ae-a7aa-74eca578d04b","Type":"ContainerStarted","Data":"0a6102c2c08043184397ade480887559f77ea3246278bb3afe643c96ef163768"} Mar 12 14:12:48.330178 master-0 kubenswrapper[7440]: I0312 14:12:48.330123 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"8fb3af0133d0946b0f849f54e6b053a8d244cc1e5f114a25ba3e224c22bcf96c"} Mar 12 14:12:48.330264 master-0 kubenswrapper[7440]: I0312 14:12:48.330178 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"22207ff89d6884489259f42baf46427c71156ff68dfb78cafcbd6e3eaaee6798"} Mar 12 14:12:48.333240 master-0 kubenswrapper[7440]: I0312 14:12:48.333196 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.333311 master-0 kubenswrapper[7440]: I0312 14:12:48.333248 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333311 master-0 kubenswrapper[7440]: I0312 14:12:48.333297 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp\") pod 
\"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333389 master-0 kubenswrapper[7440]: I0312 14:12:48.333355 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333431 master-0 kubenswrapper[7440]: I0312 14:12:48.333395 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333527 master-0 kubenswrapper[7440]: I0312 14:12:48.333504 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333573 master-0 kubenswrapper[7440]: I0312 14:12:48.333532 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333573 master-0 kubenswrapper[7440]: I0312 14:12:48.333567 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv69s\" (UniqueName: 
\"kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333643 master-0 kubenswrapper[7440]: I0312 14:12:48.333606 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333680 master-0 kubenswrapper[7440]: I0312 14:12:48.333642 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333680 master-0 kubenswrapper[7440]: I0312 14:12:48.333657 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333680 master-0 kubenswrapper[7440]: I0312 14:12:48.333676 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333759 master-0 kubenswrapper[7440]: I0312 14:12:48.333747 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333852 master-0 kubenswrapper[7440]: I0312 14:12:48.333812 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333914 master-0 kubenswrapper[7440]: I0312 14:12:48.333834 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.333955 master-0 kubenswrapper[7440]: I0312 14:12:48.333932 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.333993 master-0 kubenswrapper[7440]: I0312 14:12:48.333958 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.333993 master-0 kubenswrapper[7440]: I0312 14:12:48.333979 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.334052 master-0 kubenswrapper[7440]: I0312 14:12:48.333988 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334052 master-0 kubenswrapper[7440]: I0312 14:12:48.334046 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334117 master-0 kubenswrapper[7440]: I0312 14:12:48.334069 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.334117 master-0 kubenswrapper[7440]: I0312 14:12:48.334086 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334186 master-0 kubenswrapper[7440]: I0312 14:12:48.334115 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.334186 master-0 kubenswrapper[7440]: I0312 14:12:48.334132 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334263 master-0 kubenswrapper[7440]: I0312 14:12:48.334190 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334263 master-0 kubenswrapper[7440]: I0312 14:12:48.334211 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: 
\"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:48.334263 master-0 kubenswrapper[7440]: I0312 14:12:48.334227 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7nnk\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:48.334263 master-0 kubenswrapper[7440]: I0312 14:12:48.334247 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:12:48.334727 master-0 kubenswrapper[7440]: I0312 14:12:48.334697 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"2fa51a43e8255ddff099408eecb3af1c9c7359cdc855341d675c4d921272ecf0"} Mar 12 14:12:48.334727 master-0 kubenswrapper[7440]: I0312 14:12:48.334721 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"6ba212567515d3f9436de59fb6dea21c7df5a57d0a71d8f4512b348613929a0b"} Mar 12 14:12:48.336458 master-0 kubenswrapper[7440]: I0312 14:12:48.336430 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" 
event={"ID":"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6","Type":"ContainerStarted","Data":"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f"} Mar 12 14:12:48.337367 master-0 kubenswrapper[7440]: I0312 14:12:48.337343 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:48.337514 master-0 kubenswrapper[7440]: I0312 14:12:48.337470 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerStarted","Data":"84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b"} Mar 12 14:12:48.337651 master-0 kubenswrapper[7440]: I0312 14:12:48.337631 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l" Mar 12 14:12:48.345505 master-0 kubenswrapper[7440]: I0312 14:12:48.345465 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj" Mar 12 14:12:48.350476 master-0 kubenswrapper[7440]: I0312 14:12:48.350409 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:48.435544 master-0 kubenswrapper[7440]: I0312 14:12:48.435318 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") pod \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") "
Mar 12 14:12:48.435544 master-0 kubenswrapper[7440]: I0312 14:12:48.435411 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdg6\" (UniqueName: \"kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6\") pod \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") "
Mar 12 14:12:48.435544 master-0 kubenswrapper[7440]: I0312 14:12:48.435489 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config\") pod \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") "
Mar 12 14:12:48.435544 master-0 kubenswrapper[7440]: I0312 14:12:48.435543 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles\") pod \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\" (UID: \"4c62dd80-5d38-4385-81c2-fead2afdb3c6\") "
Mar 12 14:12:48.435797 master-0 kubenswrapper[7440]: I0312 14:12:48.435573 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2hrz\" (UniqueName: \"kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz\") pod \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") "
Mar 12 14:12:48.435797 master-0 kubenswrapper[7440]: I0312 14:12:48.435632 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config\") pod \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") "
Mar 12 14:12:48.435797 master-0 kubenswrapper[7440]: I0312 14:12:48.435676 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") pod \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\" (UID: \"951723ec-2626-45a8-86d4-ee5c5cfabf3b\") "
Mar 12 14:12:48.435879 master-0 kubenswrapper[7440]: I0312 14:12:48.435846 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv69s\" (UniqueName: \"kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.436136 master-0 kubenswrapper[7440]: I0312 14:12:48.436110 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.436641 master-0 kubenswrapper[7440]: I0312 14:12:48.436614 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c62dd80-5d38-4385-81c2-fead2afdb3c6" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:48.436795 master-0 kubenswrapper[7440]: I0312 14:12:48.436780 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.436887 master-0 kubenswrapper[7440]: I0312 14:12:48.436874 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.437028 master-0 kubenswrapper[7440]: I0312 14:12:48.437012 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.437121 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.437212 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.437395 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.437479 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config" (OuterVolumeSpecName: "config") pod "4c62dd80-5d38-4385-81c2-fead2afdb3c6" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.437620 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.439110 master-0 kubenswrapper[7440]: I0312 14:12:48.438164 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config" (OuterVolumeSpecName: "config") pod "951723ec-2626-45a8-86d4-ee5c5cfabf3b" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:48.439667 master-0 kubenswrapper[7440]: I0312 14:12:48.439651 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.439779 master-0 kubenswrapper[7440]: I0312 14:12:48.439761 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.439870 master-0 kubenswrapper[7440]: I0312 14:12:48.439857 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.440559 master-0 kubenswrapper[7440]: I0312 14:12:48.440304 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.440559 master-0 kubenswrapper[7440]: I0312 14:12:48.440258 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.440559 master-0 kubenswrapper[7440]: I0312 14:12:48.440309 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.440559 master-0 kubenswrapper[7440]: I0312 14:12:48.440224 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.443616 master-0 kubenswrapper[7440]: I0312 14:12:48.443594 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.443848 master-0 kubenswrapper[7440]: I0312 14:12:48.443834 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.444007 master-0 kubenswrapper[7440]: I0312 14:12:48.443987 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7nnk\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.462605 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.446802 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c62dd80-5d38-4385-81c2-fead2afdb3c6" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.445247 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.462764 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.462947 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.463004 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.463033 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.463210 master-0 kubenswrapper[7440]: I0312 14:12:48.463186 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463215 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463290 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463422 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463446 7440 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463489 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463507 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c62dd80-5d38-4385-81c2-fead2afdb3c6-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.463610 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.464017 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz" (OuterVolumeSpecName: "kube-api-access-l2hrz") pod "951723ec-2626-45a8-86d4-ee5c5cfabf3b" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b"). InnerVolumeSpecName "kube-api-access-l2hrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.465215 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.465703 master-0 kubenswrapper[7440]: I0312 14:12:48.465411 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.467241 master-0 kubenswrapper[7440]: I0312 14:12:48.462792 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.475268 master-0 kubenswrapper[7440]: I0312 14:12:48.467339 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6" (OuterVolumeSpecName: "kube-api-access-fkdg6") pod "4c62dd80-5d38-4385-81c2-fead2afdb3c6" (UID: "4c62dd80-5d38-4385-81c2-fead2afdb3c6"). InnerVolumeSpecName "kube-api-access-fkdg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:12:48.475268 master-0 kubenswrapper[7440]: I0312 14:12:48.468019 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.484954 master-0 kubenswrapper[7440]: I0312 14:12:48.484202 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.485216 master-0 kubenswrapper[7440]: I0312 14:12:48.484961 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.492704 master-0 kubenswrapper[7440]: I0312 14:12:48.489656 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.492704 master-0 kubenswrapper[7440]: I0312 14:12:48.492636 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "951723ec-2626-45a8-86d4-ee5c5cfabf3b" (UID: "951723ec-2626-45a8-86d4-ee5c5cfabf3b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:12:48.505700 master-0 kubenswrapper[7440]: I0312 14:12:48.505616 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nnk\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.521929 master-0 kubenswrapper[7440]: I0312 14:12:48.519274 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fpjck"]
Mar 12 14:12:48.521929 master-0 kubenswrapper[7440]: I0312 14:12:48.519990 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.523089 master-0 kubenswrapper[7440]: I0312 14:12:48.523050 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 12 14:12:48.528047 master-0 kubenswrapper[7440]: I0312 14:12:48.527784 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 14:12:48.528047 master-0 kubenswrapper[7440]: I0312 14:12:48.527940 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 12 14:12:48.533973 master-0 kubenswrapper[7440]: I0312 14:12:48.533222 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 12 14:12:48.556810 master-0 kubenswrapper[7440]: I0312 14:12:48.544563 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fpjck"]
Mar 12 14:12:48.561910 master-0 kubenswrapper[7440]: I0312 14:12:48.557017 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv69s\" (UniqueName: \"kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.566956 master-0 kubenswrapper[7440]: I0312 14:12:48.565485 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdg6\" (UniqueName: \"kubernetes.io/projected/4c62dd80-5d38-4385-81c2-fead2afdb3c6-kube-api-access-fkdg6\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.566956 master-0 kubenswrapper[7440]: I0312 14:12:48.565521 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2hrz\" (UniqueName: \"kubernetes.io/projected/951723ec-2626-45a8-86d4-ee5c5cfabf3b-kube-api-access-l2hrz\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.566956 master-0 kubenswrapper[7440]: I0312 14:12:48.565531 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951723ec-2626-45a8-86d4-ee5c5cfabf3b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:48.571819 master-0 kubenswrapper[7440]: I0312 14:12:48.571246 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:48.586541 master-0 kubenswrapper[7440]: I0312 14:12:48.586072 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:12:48.666662 master-0 kubenswrapper[7440]: I0312 14:12:48.666601 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.666866 master-0 kubenswrapper[7440]: I0312 14:12:48.666722 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.666866 master-0 kubenswrapper[7440]: I0312 14:12:48.666775 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgft\" (UniqueName: \"kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.768308 master-0 kubenswrapper[7440]: I0312 14:12:48.768257 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.768489 master-0 kubenswrapper[7440]: E0312 14:12:48.768379 7440 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Mar 12 14:12:48.768489 master-0 kubenswrapper[7440]: I0312 14:12:48.768416 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.768489 master-0 kubenswrapper[7440]: E0312 14:12:48.768439 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls podName:3ec846db-e344-4f9e-95e6-7a0055f52766 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:49.268422183 +0000 UTC m=+29.603800732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls") pod "dns-default-fpjck" (UID: "3ec846db-e344-4f9e-95e6-7a0055f52766") : secret "dns-default-metrics-tls" not found
Mar 12 14:12:48.768580 master-0 kubenswrapper[7440]: I0312 14:12:48.768548 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgft\" (UniqueName: \"kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.769228 master-0 kubenswrapper[7440]: I0312 14:12:48.769202 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.799034 master-0 kubenswrapper[7440]: I0312 14:12:48.798914 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgft\" (UniqueName: \"kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:48.804870 master-0 kubenswrapper[7440]: I0312 14:12:48.804835 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"]
Mar 12 14:12:48.992539 master-0 kubenswrapper[7440]: I0312 14:12:48.992495 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nml4k"]
Mar 12 14:12:48.993440 master-0 kubenswrapper[7440]: I0312 14:12:48.993102 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.071447 master-0 kubenswrapper[7440]: I0312 14:12:49.071391 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.071447 master-0 kubenswrapper[7440]: I0312 14:12:49.071433 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shknb\" (UniqueName: \"kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.172018 master-0 kubenswrapper[7440]: I0312 14:12:49.171976 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.172151 master-0 kubenswrapper[7440]: I0312 14:12:49.172033 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shknb\" (UniqueName: \"kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.172191 master-0 kubenswrapper[7440]: I0312 14:12:49.172172 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.189023 master-0 kubenswrapper[7440]: I0312 14:12:49.188997 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shknb\" (UniqueName: \"kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.262939 master-0 kubenswrapper[7440]: I0312 14:12:49.261708 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 12 14:12:49.274002 master-0 kubenswrapper[7440]: I0312 14:12:49.273318 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:49.292098 master-0 kubenswrapper[7440]: I0312 14:12:49.290874 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck"
Mar 12 14:12:49.336043 master-0 kubenswrapper[7440]: E0312 14:12:49.334763 7440 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:12:49.336043 master-0 kubenswrapper[7440]: E0312 14:12:49.334854 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs podName:39252b5a-d014-4319-ad81-3c1bf2ef585e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:49.834835508 +0000 UTC m=+30.170214067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-2pj4z" (UID: "39252b5a-d014-4319-ad81-3c1bf2ef585e") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:12:49.343510 master-0 kubenswrapper[7440]: I0312 14:12:49.342829 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" event={"ID":"5fb06459-09da-4620-91cf-8c3fe8f425db","Type":"ContainerStarted","Data":"ce843dfd3f27a78f901d934f6e3dbf102d0d981cef10c5b1d1777b8952181107"}
Mar 12 14:12:49.343510 master-0 kubenswrapper[7440]: I0312 14:12:49.342870 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" event={"ID":"5fb06459-09da-4620-91cf-8c3fe8f425db","Type":"ContainerStarted","Data":"b1a14449751313d757471e50a932157f1cfc8f3980d87122c44917f7224e903e"}
Mar 12 14:12:49.349252 master-0 kubenswrapper[7440]: I0312 14:12:49.347191 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da"}
Mar 12 14:12:49.349252 master-0 kubenswrapper[7440]: I0312 14:12:49.347223 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"81cd0864a54b3fb544c03e1c4cc3bb2a1e8301732b585b1ac0d2dad7435e59f9"}
Mar 12 14:12:49.349252 master-0 kubenswrapper[7440]: I0312 14:12:49.347945 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"
Mar 12 14:12:49.351645 master-0 kubenswrapper[7440]: I0312 14:12:49.350238 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"
Mar 12 14:12:49.354307 master-0 kubenswrapper[7440]: E0312 14:12:49.354233 7440 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:12:49.359677 master-0 kubenswrapper[7440]: I0312 14:12:49.359606 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" podStartSLOduration=1.3595854520000001 podStartE2EDuration="1.359585452s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:49.358987498 +0000 UTC m=+29.694366077" watchObservedRunningTime="2026-03-12 14:12:49.359585452 +0000 UTC m=+29.694964011"
Mar 12 14:12:49.405727 master-0 kubenswrapper[7440]: I0312 14:12:49.405354 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"]
Mar 12 14:12:49.411976 master-0 kubenswrapper[7440]: I0312 14:12:49.411510 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"]
Mar 12 14:12:49.413582 master-0 kubenswrapper[7440]: I0312 14:12:49.412333 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"
Mar 12 14:12:49.413582 master-0 kubenswrapper[7440]: I0312 14:12:49.413342 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549bf695c-78xdj"]
Mar 12 14:12:49.416932 master-0 kubenswrapper[7440]: I0312 14:12:49.415181 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 14:12:49.416932 master-0 kubenswrapper[7440]: I0312 14:12:49.415450 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 14:12:49.416932 master-0 kubenswrapper[7440]: I0312 14:12:49.415618 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 14:12:49.416932 master-0 kubenswrapper[7440]: I0312 14:12:49.415756 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 14:12:49.416932 master-0 kubenswrapper[7440]: I0312 14:12:49.415921 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 14:12:49.426780 master-0 kubenswrapper[7440]: I0312 14:12:49.425498 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"]
Mar 12 14:12:49.442924 master-0 kubenswrapper[7440]: I0312 14:12:49.442068 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nml4k"
Mar 12 14:12:49.456115 master-0 kubenswrapper[7440]: I0312 14:12:49.455093 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"]
Mar 12 14:12:49.469562 master-0 kubenswrapper[7440]: I0312 14:12:49.460753 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86dc8d5fd9-pzj8l"]
Mar 12 14:12:49.469562 master-0 kubenswrapper[7440]: I0312 14:12:49.467413 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 12 14:12:49.481067 master-0 kubenswrapper[7440]: I0312 14:12:49.471666 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:12:49.481067 master-0 kubenswrapper[7440]: E0312 14:12:49.478035 7440 projected.go:194] Error preparing data for projected volume kube-api-access-ktncx for pod openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:12:49.484941 master-0 kubenswrapper[7440]: E0312 14:12:49.483171 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx podName:39252b5a-d014-4319-ad81-3c1bf2ef585e nodeName:}" failed. No retries permitted until 2026-03-12 14:12:49.983147554 +0000 UTC m=+30.318526113 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-ktncx" (UniqueName: "kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx") pod "catalogd-controller-manager-7f8b8b6f4c-2pj4z" (UID: "39252b5a-d014-4319-ad81-3c1bf2ef585e") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:12:49.514248 master-0 kubenswrapper[7440]: I0312 14:12:49.493065 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fpjck" Mar 12 14:12:49.514248 master-0 kubenswrapper[7440]: W0312 14:12:49.512802 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3815db41_fe01_43f6_b75c_4ccca9124f51.slice/crio-2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4 WatchSource:0}: Error finding container 2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4: Status 404 returned error can't find the container with id 2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4 Mar 12 14:12:49.570101 master-0 kubenswrapper[7440]: I0312 14:12:49.570040 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590212 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590264 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod 
\"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590364 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590449 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvrq\" (UniqueName: \"kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590487 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c62dd80-5d38-4385-81c2-fead2afdb3c6-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:49.591594 master-0 kubenswrapper[7440]: I0312 14:12:49.590502 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/951723ec-2626-45a8-86d4-ee5c5cfabf3b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:12:49.593184 master-0 kubenswrapper[7440]: I0312 14:12:49.593144 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: I0312 14:12:49.691510 7440 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: I0312 14:12:49.691604 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvrq\" (UniqueName: \"kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: I0312 14:12:49.691629 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: I0312 14:12:49.691646 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: E0312 14:12:49.691781 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:49.692780 master-0 kubenswrapper[7440]: E0312 14:12:49.691826 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:12:50.191812184 +0000 UTC m=+30.527190743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found Mar 12 14:12:49.695168 master-0 kubenswrapper[7440]: I0312 14:12:49.695124 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.712394 master-0 kubenswrapper[7440]: I0312 14:12:49.712350 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.721126 master-0 kubenswrapper[7440]: I0312 14:12:49.721091 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fpjck"] Mar 12 14:12:49.722585 master-0 kubenswrapper[7440]: I0312 14:12:49.722548 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvrq\" (UniqueName: \"kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " 
pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:49.814739 master-0 kubenswrapper[7440]: I0312 14:12:49.814691 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c62dd80-5d38-4385-81c2-fead2afdb3c6" path="/var/lib/kubelet/pods/4c62dd80-5d38-4385-81c2-fead2afdb3c6/volumes" Mar 12 14:12:49.815245 master-0 kubenswrapper[7440]: I0312 14:12:49.815223 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="951723ec-2626-45a8-86d4-ee5c5cfabf3b" path="/var/lib/kubelet/pods/951723ec-2626-45a8-86d4-ee5c5cfabf3b/volumes" Mar 12 14:12:49.895413 master-0 kubenswrapper[7440]: I0312 14:12:49.895183 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:49.900706 master-0 kubenswrapper[7440]: I0312 14:12:49.900662 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:49.996430 master-0 kubenswrapper[7440]: I0312 14:12:49.996394 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:50.000071 master-0 
kubenswrapper[7440]: I0312 14:12:50.000030 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:50.198270 master-0 kubenswrapper[7440]: I0312 14:12:50.198142 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:50.198270 master-0 kubenswrapper[7440]: E0312 14:12:50.198245 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:50.198453 master-0 kubenswrapper[7440]: E0312 14:12:50.198290 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:12:51.198276515 +0000 UTC m=+31.533655074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found Mar 12 14:12:50.257128 master-0 kubenswrapper[7440]: I0312 14:12:50.252937 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:50.361190 master-0 kubenswrapper[7440]: I0312 14:12:50.360201 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nml4k" event={"ID":"3815db41-fe01-43f6-b75c-4ccca9124f51","Type":"ContainerStarted","Data":"c9974749d7b55714b2a366fdd455a4e5648ebc243ccac259517cdb7794faf5cb"} Mar 12 14:12:50.361190 master-0 kubenswrapper[7440]: I0312 14:12:50.360251 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nml4k" event={"ID":"3815db41-fe01-43f6-b75c-4ccca9124f51","Type":"ContainerStarted","Data":"2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4"} Mar 12 14:12:50.361190 master-0 kubenswrapper[7440]: I0312 14:12:50.361060 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"a917672632ddd41ece955a9caf8b6f8e502d8c6d1a179cc7a84283068844b577"} Mar 12 14:12:50.365151 master-0 kubenswrapper[7440]: I0312 14:12:50.364626 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"cbf45306386e8635befce668d8225cbafe68cc1140eea40d954cb85ff55d0336"} Mar 12 14:12:50.365151 master-0 kubenswrapper[7440]: I0312 14:12:50.364763 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:12:50.373272 master-0 kubenswrapper[7440]: I0312 14:12:50.373203 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nml4k" podStartSLOduration=2.373189259 podStartE2EDuration="2.373189259s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:50.372858701 +0000 UTC m=+30.708237250" watchObservedRunningTime="2026-03-12 14:12:50.373189259 +0000 UTC m=+30.708567818" Mar 12 14:12:50.406139 master-0 kubenswrapper[7440]: I0312 14:12:50.397923 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" podStartSLOduration=2.397884202 podStartE2EDuration="2.397884202s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:50.396165868 +0000 UTC m=+30.731544437" watchObservedRunningTime="2026-03-12 14:12:50.397884202 +0000 UTC m=+30.733262761" Mar 12 14:12:50.454979 master-0 kubenswrapper[7440]: I0312 14:12:50.452922 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"] Mar 12 14:12:50.590919 master-0 kubenswrapper[7440]: I0312 14:12:50.589695 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"] Mar 12 14:12:50.592217 master-0 kubenswrapper[7440]: I0312 14:12:50.592178 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.597685 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598038 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598174 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598313 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598451 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598558 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 14:12:50.598728 master-0 kubenswrapper[7440]: I0312 14:12:50.598702 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 14:12:50.599231 master-0 kubenswrapper[7440]: I0312 14:12:50.598824 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 14:12:50.603212 master-0 kubenswrapper[7440]: I0312 14:12:50.603177 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"] Mar 12 14:12:50.704607 master-0 kubenswrapper[7440]: I0312 14:12:50.704381 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704607 master-0 kubenswrapper[7440]: I0312 14:12:50.704423 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704607 master-0 kubenswrapper[7440]: I0312 14:12:50.704461 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704607 master-0 kubenswrapper[7440]: I0312 14:12:50.704477 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704607 master-0 kubenswrapper[7440]: I0312 14:12:50.704512 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704916 master-0 
kubenswrapper[7440]: I0312 14:12:50.704653 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dwz\" (UniqueName: \"kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704916 master-0 kubenswrapper[7440]: I0312 14:12:50.704772 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.704916 master-0 kubenswrapper[7440]: I0312 14:12:50.704804 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806015 master-0 kubenswrapper[7440]: I0312 14:12:50.805971 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806161 master-0 kubenswrapper[7440]: I0312 14:12:50.806032 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies\") pod \"apiserver-757d65d745-gzpdw\" (UID: 
\"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806322 master-0 kubenswrapper[7440]: I0312 14:12:50.806275 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806368 master-0 kubenswrapper[7440]: I0312 14:12:50.806339 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806398 master-0 kubenswrapper[7440]: I0312 14:12:50.806381 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806434 master-0 kubenswrapper[7440]: I0312 14:12:50.806403 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5dwz\" (UniqueName: \"kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806467 master-0 kubenswrapper[7440]: I0312 14:12:50.806433 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.806495 master-0 kubenswrapper[7440]: I0312 14:12:50.806465 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.807052 master-0 kubenswrapper[7440]: I0312 14:12:50.806662 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.807052 master-0 kubenswrapper[7440]: I0312 14:12:50.806888 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.807052 master-0 kubenswrapper[7440]: I0312 14:12:50.806938 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.807339 master-0 kubenswrapper[7440]: I0312 14:12:50.807306 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.811461 master-0 kubenswrapper[7440]: I0312 14:12:50.811380 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.811781 master-0 kubenswrapper[7440]: I0312 14:12:50.811732 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.821041 master-0 kubenswrapper[7440]: I0312 14:12:50.820984 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.829824 master-0 kubenswrapper[7440]: I0312 14:12:50.829772 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5dwz\" (UniqueName: \"kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz\") pod \"apiserver-757d65d745-gzpdw\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:50.934417 master-0 kubenswrapper[7440]: I0312 14:12:50.933867 7440 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:12:51.130673 master-0 kubenswrapper[7440]: I0312 14:12:51.130586 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"] Mar 12 14:12:51.212883 master-0 kubenswrapper[7440]: I0312 14:12:51.212654 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:51.213114 master-0 kubenswrapper[7440]: E0312 14:12:51.212821 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:51.213114 master-0 kubenswrapper[7440]: E0312 14:12:51.212980 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.212961239 +0000 UTC m=+33.548339798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found Mar 12 14:12:51.369924 master-0 kubenswrapper[7440]: I0312 14:12:51.369799 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793"} Mar 12 14:12:51.369924 master-0 kubenswrapper[7440]: I0312 14:12:51.369843 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"90ca548788bdb3dbbc3bce6e0bb77f916ef9ff6e9d18d4c0ee025d2ba9c36e55"} Mar 12 14:12:51.369924 master-0 kubenswrapper[7440]: I0312 14:12:51.369854 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"48b23f5b2fb0b4600ed151be719911ca6e8598a87db7cece2fed00b00050b177"} Mar 12 14:12:51.370160 master-0 kubenswrapper[7440]: I0312 14:12:51.369952 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:12:51.370970 master-0 kubenswrapper[7440]: I0312 14:12:51.370928 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" event={"ID":"1edf236b-654d-4568-ab33-b1f408dcbec6","Type":"ContainerStarted","Data":"8d6e945225bb5f896e615cb1136c4b7a8164a71da35bf0c82c5fc6e8b79b6cc2"} Mar 12 14:12:51.385664 master-0 kubenswrapper[7440]: I0312 14:12:51.385519 7440 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" podStartSLOduration=3.385501374 podStartE2EDuration="3.385501374s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:12:51.383408882 +0000 UTC m=+31.718787461" watchObservedRunningTime="2026-03-12 14:12:51.385501374 +0000 UTC m=+31.720879933" Mar 12 14:12:52.009624 master-0 kubenswrapper[7440]: I0312 14:12:52.009567 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"] Mar 12 14:12:52.010307 master-0 kubenswrapper[7440]: I0312 14:12:52.010277 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.012392 master-0 kubenswrapper[7440]: I0312 14:12:52.012206 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:12:52.012392 master-0 kubenswrapper[7440]: I0312 14:12:52.012212 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:12:52.012551 master-0 kubenswrapper[7440]: I0312 14:12:52.012447 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:12:52.012728 master-0 kubenswrapper[7440]: I0312 14:12:52.012620 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:12:52.012970 master-0 kubenswrapper[7440]: I0312 14:12:52.012886 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:12:52.020448 master-0 kubenswrapper[7440]: I0312 14:12:52.020354 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"] Mar 12 14:12:52.021724 master-0 kubenswrapper[7440]: I0312 14:12:52.021704 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:12:52.123037 master-0 kubenswrapper[7440]: I0312 14:12:52.122978 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrv5\" (UniqueName: \"kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.123197 master-0 kubenswrapper[7440]: I0312 14:12:52.123122 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.123197 master-0 kubenswrapper[7440]: I0312 14:12:52.123166 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.123263 master-0 kubenswrapper[7440]: I0312 14:12:52.123216 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " 
pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.123328 master-0 kubenswrapper[7440]: I0312 14:12:52.123309 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.224848 master-0 kubenswrapper[7440]: I0312 14:12:52.224804 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.225434 master-0 kubenswrapper[7440]: I0312 14:12:52.224862 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.225434 master-0 kubenswrapper[7440]: I0312 14:12:52.224928 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.225434 master-0 kubenswrapper[7440]: I0312 14:12:52.224964 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.225434 master-0 kubenswrapper[7440]: I0312 14:12:52.225010 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrrv5\" (UniqueName: \"kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.225819 master-0 kubenswrapper[7440]: E0312 14:12:52.225653 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:52.225819 master-0 kubenswrapper[7440]: E0312 14:12:52.225694 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca podName:70d139ff-05ec-4733-8c0c-b7de1a535d60 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:52.725681684 +0000 UTC m=+33.061060243 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca") pod "controller-manager-74d66c4c7c-5lsl6" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60") : configmap "client-ca" not found Mar 12 14:12:52.226955 master-0 kubenswrapper[7440]: I0312 14:12:52.226922 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.228911 master-0 kubenswrapper[7440]: I0312 14:12:52.228869 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.233206 master-0 kubenswrapper[7440]: I0312 14:12:52.233161 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.248145 master-0 kubenswrapper[7440]: I0312 14:12:52.247975 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrrv5\" (UniqueName: \"kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.731575 master-0 
kubenswrapper[7440]: I0312 14:12:52.731453 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:52.731575 master-0 kubenswrapper[7440]: E0312 14:12:52.731577 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:52.731819 master-0 kubenswrapper[7440]: E0312 14:12:52.731639 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca podName:70d139ff-05ec-4733-8c0c-b7de1a535d60 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:53.731623381 +0000 UTC m=+34.067001940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca") pod "controller-manager-74d66c4c7c-5lsl6" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60") : configmap "client-ca" not found Mar 12 14:12:53.144684 master-0 kubenswrapper[7440]: I0312 14:12:53.144621 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:53.144684 master-0 kubenswrapper[7440]: I0312 14:12:53.144667 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: 
\"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:53.144684 master-0 kubenswrapper[7440]: I0312 14:12:53.144685 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:53.144909 master-0 kubenswrapper[7440]: I0312 14:12:53.144710 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:53.144909 master-0 kubenswrapper[7440]: I0312 14:12:53.144728 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:53.144909 master-0 kubenswrapper[7440]: I0312 14:12:53.144748 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:53.144909 master-0 kubenswrapper[7440]: I0312 14:12:53.144771 7440 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:53.148384 master-0 kubenswrapper[7440]: I0312 14:12:53.148358 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:53.150304 master-0 kubenswrapper[7440]: I0312 14:12:53.150275 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:53.150368 master-0 kubenswrapper[7440]: I0312 14:12:53.150328 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:53.151422 master-0 kubenswrapper[7440]: I0312 14:12:53.151383 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " 
pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:53.151888 master-0 kubenswrapper[7440]: I0312 14:12:53.151865 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:53.152528 master-0 kubenswrapper[7440]: I0312 14:12:53.152477 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"multus-admission-controller-8d675b596-sm9nb\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:53.152759 master-0 kubenswrapper[7440]: I0312 14:12:53.152716 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:53.246293 master-0 kubenswrapper[7440]: I0312 14:12:53.246187 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:12:53.247658 master-0 kubenswrapper[7440]: E0312 14:12:53.246320 7440 configmap.go:193] Couldn't get configMap 
openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:53.247658 master-0 kubenswrapper[7440]: E0312 14:12:53.246376 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:12:57.246362107 +0000 UTC m=+37.581740666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found Mar 12 14:12:53.261523 master-0 kubenswrapper[7440]: I0312 14:12:53.261130 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:12:53.261523 master-0 kubenswrapper[7440]: I0312 14:12:53.261203 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:53.261523 master-0 kubenswrapper[7440]: I0312 14:12:53.261316 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:12:53.262632 master-0 kubenswrapper[7440]: I0312 14:12:53.262597 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:12:53.265809 master-0 kubenswrapper[7440]: I0312 14:12:53.265561 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:12:53.265809 master-0 kubenswrapper[7440]: I0312 14:12:53.265583 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:12:53.265809 master-0 kubenswrapper[7440]: I0312 14:12:53.265728 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:12:53.379099 master-0 kubenswrapper[7440]: I0312 14:12:53.379029 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"fe9d2abf39cf7290e7c15c5dee12b2f88b594ebe36b5f363c1c4813ec36888d6"} Mar 12 14:12:53.381277 master-0 kubenswrapper[7440]: I0312 14:12:53.381213 7440 generic.go:334] "Generic (PLEG): container finished" podID="1763269d-d7d2-44ae-a7aa-74eca578d04b" containerID="82b52b848bd037248fe2830dd77de8e1f754a1ac267f5743b2929e5fcd07f837" exitCode=0 Mar 12 14:12:53.381277 master-0 kubenswrapper[7440]: I0312 14:12:53.381251 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" event={"ID":"1763269d-d7d2-44ae-a7aa-74eca578d04b","Type":"ContainerDied","Data":"82b52b848bd037248fe2830dd77de8e1f754a1ac267f5743b2929e5fcd07f837"} Mar 12 14:12:53.753789 master-0 kubenswrapper[7440]: I0312 14:12:53.753730 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:53.753995 master-0 kubenswrapper[7440]: E0312 14:12:53.753908 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:53.753995 master-0 kubenswrapper[7440]: E0312 14:12:53.753986 7440 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca podName:70d139ff-05ec-4733-8c0c-b7de1a535d60 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:55.753968796 +0000 UTC m=+36.089347355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca") pod "controller-manager-74d66c4c7c-5lsl6" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60") : configmap "client-ca" not found Mar 12 14:12:53.769826 master-0 kubenswrapper[7440]: I0312 14:12:53.769733 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:12:54.165990 master-0 kubenswrapper[7440]: I0312 14:12:54.165882 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"] Mar 12 14:12:54.176801 master-0 kubenswrapper[7440]: I0312 14:12:54.176654 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"] Mar 12 14:12:54.327003 master-0 kubenswrapper[7440]: I0312 14:12:54.326330 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"] Mar 12 14:12:54.331301 master-0 kubenswrapper[7440]: I0312 14:12:54.331246 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n9v7g"] Mar 12 14:12:54.335405 master-0 kubenswrapper[7440]: I0312 14:12:54.335267 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv"] Mar 12 14:12:54.344109 master-0 kubenswrapper[7440]: I0312 14:12:54.343494 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79"] Mar 12 14:12:54.386880 master-0 kubenswrapper[7440]: I0312 
14:12:54.386816 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"7d54ce2bc817ff7890a25ecf66d17535153695c3255ca6f5f7a08771a0185ede"} Mar 12 14:12:54.388537 master-0 kubenswrapper[7440]: I0312 14:12:54.387582 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fpjck" Mar 12 14:12:54.401685 master-0 kubenswrapper[7440]: I0312 14:12:54.401621 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fpjck" podStartSLOduration=3.188362769 podStartE2EDuration="6.401605123s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" firstStartedPulling="2026-03-12 14:12:49.751580496 +0000 UTC m=+30.086959055" lastFinishedPulling="2026-03-12 14:12:52.96482285 +0000 UTC m=+33.300201409" observedRunningTime="2026-03-12 14:12:54.400329422 +0000 UTC m=+34.735707981" watchObservedRunningTime="2026-03-12 14:12:54.401605123 +0000 UTC m=+34.736983672" Mar 12 14:12:54.682845 master-0 kubenswrapper[7440]: W0312 14:12:54.682782 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42dbcb8f_e8c4_413e_977d_40aa6df226aa.slice/crio-dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e WatchSource:0}: Error finding container dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e: Status 404 returned error can't find the container with id dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e Mar 12 14:12:54.687977 master-0 kubenswrapper[7440]: W0312 14:12:54.687777 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fdce71e_8085_4316_be40_e535530c2ca4.slice/crio-bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568 WatchSource:0}: Error finding container 
bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568: Status 404 returned error can't find the container with id bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568 Mar 12 14:12:54.690944 master-0 kubenswrapper[7440]: W0312 14:12:54.689437 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7023af8b_bfcc_4253_85cd_d891dff1c86e.slice/crio-fab4209128367cae9aae1c602fe8e2a20cfcbb53ea4e672f691caba442c30231 WatchSource:0}: Error finding container fab4209128367cae9aae1c602fe8e2a20cfcbb53ea4e672f691caba442c30231: Status 404 returned error can't find the container with id fab4209128367cae9aae1c602fe8e2a20cfcbb53ea4e672f691caba442c30231 Mar 12 14:12:54.691304 master-0 kubenswrapper[7440]: W0312 14:12:54.690981 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272b53c4_134c_404d_9a27_c7371415b1f7.slice/crio-d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda WatchSource:0}: Error finding container d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda: Status 404 returned error can't find the container with id d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda Mar 12 14:12:54.697320 master-0 kubenswrapper[7440]: W0312 14:12:54.697286 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc0d552_01c7_4212_a551_d16419f2dc80.slice/crio-d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4 WatchSource:0}: Error finding container d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4: Status 404 returned error can't find the container with id d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4 Mar 12 14:12:54.723032 master-0 kubenswrapper[7440]: I0312 14:12:54.722990 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" Mar 12 14:12:54.872311 master-0 kubenswrapper[7440]: I0312 14:12:54.872267 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " Mar 12 14:12:54.872311 master-0 kubenswrapper[7440]: I0312 14:12:54.872315 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " Mar 12 14:12:54.872505 master-0 kubenswrapper[7440]: I0312 14:12:54.872335 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " Mar 12 14:12:54.872505 master-0 kubenswrapper[7440]: I0312 14:12:54.872374 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " Mar 12 14:12:54.872505 master-0 kubenswrapper[7440]: I0312 14:12:54.872434 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") " Mar 12 14:12:54.872505 master-0 kubenswrapper[7440]: I0312 14:12:54.872474 7440 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.872505 master-0 kubenswrapper[7440]: I0312 14:12:54.872495 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x5bj\" (UniqueName: \"kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.872640 master-0 kubenswrapper[7440]: I0312 14:12:54.872511 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.872640 master-0 kubenswrapper[7440]: I0312 14:12:54.872528 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.872640 master-0 kubenswrapper[7440]: I0312 14:12:54.872544 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.872640 master-0 kubenswrapper[7440]: I0312 14:12:54.872565 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir\") pod \"1763269d-d7d2-44ae-a7aa-74eca578d04b\" (UID: \"1763269d-d7d2-44ae-a7aa-74eca578d04b\") "
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.872759 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.872890 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.873208 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config" (OuterVolumeSpecName: "config") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.873220 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.873656 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit" (OuterVolumeSpecName: "audit") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:54.873783 master-0 kubenswrapper[7440]: I0312 14:12:54.873761 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:54.873995 master-0 kubenswrapper[7440]: I0312 14:12:54.873858 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:12:54.875719 master-0 kubenswrapper[7440]: I0312 14:12:54.875475 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:12:54.876227 master-0 kubenswrapper[7440]: I0312 14:12:54.876186 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj" (OuterVolumeSpecName: "kube-api-access-9x5bj") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "kube-api-access-9x5bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:12:54.876668 master-0 kubenswrapper[7440]: I0312 14:12:54.876587 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:12:54.880623 master-0 kubenswrapper[7440]: I0312 14:12:54.880351 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1763269d-d7d2-44ae-a7aa-74eca578d04b" (UID: "1763269d-d7d2-44ae-a7aa-74eca578d04b"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:12:54.973886 master-0 kubenswrapper[7440]: I0312 14:12:54.973832 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-client\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.973886 master-0 kubenswrapper[7440]: I0312 14:12:54.973877 7440 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-image-import-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973910 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x5bj\" (UniqueName: \"kubernetes.io/projected/1763269d-d7d2-44ae-a7aa-74eca578d04b-kube-api-access-9x5bj\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973928 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973938 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973947 7440 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-encryption-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973957 7440 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973966 7440 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973977 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1763269d-d7d2-44ae-a7aa-74eca578d04b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973987 7440 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1763269d-d7d2-44ae-a7aa-74eca578d04b-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:54.974071 master-0 kubenswrapper[7440]: I0312 14:12:54.973996 7440 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1763269d-d7d2-44ae-a7aa-74eca578d04b-audit\") on node \"master-0\" DevicePath \"\""
Mar 12 14:12:55.393925 master-0 kubenswrapper[7440]: I0312 14:12:55.393869 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"8c7c68a3a3bab58942cd948fa92e68afb328afcaa83ac1189a7b2322e7ba46ad"}
Mar 12 14:12:55.393925 master-0 kubenswrapper[7440]: I0312 14:12:55.393931 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"57327dd3cf51a7946c6428acbb4cffd5439484941e4f876980813ac47338ecdb"}
Mar 12 14:12:55.394763 master-0 kubenswrapper[7440]: I0312 14:12:55.394735 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" event={"ID":"07a6a1d6-fecf-4847-b7c1-160d5d7320fb","Type":"ContainerStarted","Data":"a41bc83813b39c2fa459a0e9284786027dca250eb150090c47a705729e7d08f5"}
Mar 12 14:12:55.395961 master-0 kubenswrapper[7440]: I0312 14:12:55.395882 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz" event={"ID":"1763269d-d7d2-44ae-a7aa-74eca578d04b","Type":"ContainerDied","Data":"0a6102c2c08043184397ade480887559f77ea3246278bb3afe643c96ef163768"}
Mar 12 14:12:55.395961 master-0 kubenswrapper[7440]: I0312 14:12:55.395952 7440 scope.go:117] "RemoveContainer" containerID="82b52b848bd037248fe2830dd77de8e1f754a1ac267f5743b2929e5fcd07f837"
Mar 12 14:12:55.396443 master-0 kubenswrapper[7440]: I0312 14:12:55.396405 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-65c58d4d64-b8nwz"
Mar 12 14:12:55.396997 master-0 kubenswrapper[7440]: I0312 14:12:55.396976 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerStarted","Data":"fab4209128367cae9aae1c602fe8e2a20cfcbb53ea4e672f691caba442c30231"}
Mar 12 14:12:55.402164 master-0 kubenswrapper[7440]: I0312 14:12:55.402114 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" event={"ID":"272b53c4-134c-404d-9a27-c7371415b1f7","Type":"ContainerStarted","Data":"d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda"}
Mar 12 14:12:55.404536 master-0 kubenswrapper[7440]: I0312 14:12:55.404023 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" event={"ID":"42dbcb8f-e8c4-413e-977d-40aa6df226aa","Type":"ContainerStarted","Data":"dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e"}
Mar 12 14:12:55.405767 master-0 kubenswrapper[7440]: I0312 14:12:55.405740 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568"}
Mar 12 14:12:55.407134 master-0 kubenswrapper[7440]: I0312 14:12:55.407098 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4"}
Mar 12 14:12:55.411404 master-0 kubenswrapper[7440]: I0312 14:12:55.411357 7440 generic.go:334] "Generic (PLEG): container finished" podID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerID="46ceffc4cf5b43d6667c001d7ca724c81abc46d22b9354d94d793fd041e473d2" exitCode=0
Mar 12 14:12:55.411506 master-0 kubenswrapper[7440]: I0312 14:12:55.411467 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" event={"ID":"1edf236b-654d-4568-ab33-b1f408dcbec6","Type":"ContainerDied","Data":"46ceffc4cf5b43d6667c001d7ca724c81abc46d22b9354d94d793fd041e473d2"}
Mar 12 14:12:55.477012 master-0 kubenswrapper[7440]: I0312 14:12:55.475736 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"]
Mar 12 14:12:55.477012 master-0 kubenswrapper[7440]: I0312 14:12:55.475790 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-65c58d4d64-b8nwz"]
Mar 12 14:12:55.671036 master-0 kubenswrapper[7440]: I0312 14:12:55.669808 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 14:12:55.671036 master-0 kubenswrapper[7440]: E0312 14:12:55.670002 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1763269d-d7d2-44ae-a7aa-74eca578d04b" containerName="fix-audit-permissions"
Mar 12 14:12:55.671036 master-0 kubenswrapper[7440]: I0312 14:12:55.670016 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="1763269d-d7d2-44ae-a7aa-74eca578d04b" containerName="fix-audit-permissions"
Mar 12 14:12:55.671036 master-0 kubenswrapper[7440]: I0312 14:12:55.670082 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="1763269d-d7d2-44ae-a7aa-74eca578d04b" containerName="fix-audit-permissions"
Mar 12 14:12:55.671036 master-0 kubenswrapper[7440]: I0312 14:12:55.670357 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.672797 master-0 kubenswrapper[7440]: I0312 14:12:55.672605 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 14:12:55.686609 master-0 kubenswrapper[7440]: I0312 14:12:55.686576 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: I0312 14:12:55.786413 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: I0312 14:12:55.786488 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: I0312 14:12:55.786542 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: I0312 14:12:55.786576 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: E0312 14:12:55.786752 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:55.787185 master-0 kubenswrapper[7440]: E0312 14:12:55.786830 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca podName:70d139ff-05ec-4733-8c0c-b7de1a535d60 nodeName:}" failed. No retries permitted until 2026-03-12 14:12:59.786815669 +0000 UTC m=+40.122194228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca") pod "controller-manager-74d66c4c7c-5lsl6" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60") : configmap "client-ca" not found
Mar 12 14:12:55.823218 master-0 kubenswrapper[7440]: I0312 14:12:55.823165 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1763269d-d7d2-44ae-a7aa-74eca578d04b" path="/var/lib/kubelet/pods/1763269d-d7d2-44ae-a7aa-74eca578d04b/volumes"
Mar 12 14:12:55.890167 master-0 kubenswrapper[7440]: I0312 14:12:55.888141 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.890167 master-0 kubenswrapper[7440]: I0312 14:12:55.888277 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.890167 master-0 kubenswrapper[7440]: I0312 14:12:55.888292 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.890167 master-0 kubenswrapper[7440]: I0312 14:12:55.888364 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.890167 master-0 kubenswrapper[7440]: I0312 14:12:55.888450 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:55.906130 master-0 kubenswrapper[7440]: I0312 14:12:55.906062 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access\") pod \"installer-1-master-0\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:56.006767 master-0 kubenswrapper[7440]: I0312 14:12:56.006285 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 12 14:12:56.014574 master-0 kubenswrapper[7440]: I0312 14:12:56.014512 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6b7d9dd778-7klpj"]
Mar 12 14:12:56.016339 master-0 kubenswrapper[7440]: I0312 14:12:56.016311 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.022751 master-0 kubenswrapper[7440]: I0312 14:12:56.021681 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 14:12:56.022751 master-0 kubenswrapper[7440]: I0312 14:12:56.021819 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 14:12:56.023133 master-0 kubenswrapper[7440]: I0312 14:12:56.022831 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 14:12:56.027792 master-0 kubenswrapper[7440]: I0312 14:12:56.026393 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 14:12:56.027792 master-0 kubenswrapper[7440]: I0312 14:12:56.026662 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 14:12:56.027792 master-0 kubenswrapper[7440]: I0312 14:12:56.026838 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 14:12:56.028491 master-0 kubenswrapper[7440]: I0312 14:12:56.028460 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 12 14:12:56.044057 master-0 kubenswrapper[7440]: I0312 14:12:56.036511 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 14:12:56.044057 master-0 kubenswrapper[7440]: I0312 14:12:56.036540 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 14:12:56.044057 master-0 kubenswrapper[7440]: I0312 14:12:56.036812 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 14:12:56.044057 master-0 kubenswrapper[7440]: I0312 14:12:56.037349 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6b7d9dd778-7klpj"]
Mar 12 14:12:56.192614 master-0 kubenswrapper[7440]: I0312 14:12:56.192494 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192614 master-0 kubenswrapper[7440]: I0312 14:12:56.192555 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192614 master-0 kubenswrapper[7440]: I0312 14:12:56.192588 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krm9\" (UniqueName: \"kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192614 master-0 kubenswrapper[7440]: I0312 14:12:56.192617 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192654 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192674 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192694 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192713 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192730 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192748 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.192915 master-0 kubenswrapper[7440]: I0312 14:12:56.192810 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.293824 master-0 kubenswrapper[7440]: I0312 14:12:56.293777 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.293824 master-0 kubenswrapper[7440]: I0312 14:12:56.293823 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294045 master-0 kubenswrapper[7440]: I0312 14:12:56.293848 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krm9\" (UniqueName: \"kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294045 master-0 kubenswrapper[7440]: I0312 14:12:56.293873 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294303 master-0 kubenswrapper[7440]: I0312 14:12:56.294174 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294303 master-0 kubenswrapper[7440]: I0312 14:12:56.294271 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294391 master-0 kubenswrapper[7440]: I0312 14:12:56.294339 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294426 master-0 kubenswrapper[7440]: I0312 14:12:56.294394 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294426 master-0 kubenswrapper[7440]: I0312 14:12:56.294414 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294476 master-0 kubenswrapper[7440]: I0312 14:12:56.294430 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.294503 master-0 kubenswrapper[7440]: I0312 14:12:56.294495 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.295825 master-0 kubenswrapper[7440]: I0312 14:12:56.294669 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.295825 master-0 kubenswrapper[7440]: I0312 14:12:56.295097 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.295825 master-0 kubenswrapper[7440]: I0312 14:12:56.294340 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.295825 master-0 kubenswrapper[7440]: I0312 14:12:56.295704 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.295825 master-0 kubenswrapper[7440]: I0312 14:12:56.295792 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.296553 master-0 kubenswrapper[7440]: I0312 14:12:56.296531 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.297200 master-0 kubenswrapper[7440]: I0312 14:12:56.297154 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.301029 master-0 kubenswrapper[7440]: I0312 14:12:56.300998 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.311494 master-0 kubenswrapper[7440]: I0312 14:12:56.311460 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.312840 master-0 kubenswrapper[7440]: I0312 14:12:56.312808 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.323170 master-0 kubenswrapper[7440]: I0312 14:12:56.320469 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krm9\" (UniqueName: \"kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.347178 master-0 kubenswrapper[7440]: I0312 14:12:56.347128 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:12:56.360523 master-0 kubenswrapper[7440]: I0312 14:12:56.360467 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"]
Mar 12 14:12:56.423551 master-0 kubenswrapper[7440]: I0312 14:12:56.423499 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" event={"ID":"1edf236b-654d-4568-ab33-b1f408dcbec6","Type":"ContainerStarted","Data":"7baf84a669ae145308ec696ea2a3c0f0d8a3eaa1489aa598dc200ebb070fc533"}
Mar 12 14:12:56.443859 master-0 kubenswrapper[7440]: I0312 14:12:56.443701 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" podStartSLOduration=2.839562517 podStartE2EDuration="6.443675017s" podCreationTimestamp="2026-03-12 14:12:50 +0000 UTC" firstStartedPulling="2026-03-12 14:12:51.147886966 +0000 UTC m=+31.483265525" lastFinishedPulling="2026-03-12 14:12:54.751999466 +0000 UTC m=+35.087378025" observedRunningTime="2026-03-12 14:12:56.442131467 +0000 UTC m=+36.777510026" watchObservedRunningTime="2026-03-12 14:12:56.443675017 +0000 UTC m=+36.779053576"
Mar 12 14:12:57.311167 master-0 kubenswrapper[7440]: I0312 14:12:57.311115 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"
Mar 12 14:12:57.311337 master-0 kubenswrapper[7440]: E0312 14:12:57.311287 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 14:12:57.311369 master-0 kubenswrapper[7440]: E0312 14:12:57.311348 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:13:05.311329226 +0000 UTC m=+45.646707785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found
Mar 12 14:12:57.433515 master-0 kubenswrapper[7440]: I0312 14:12:57.433380 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="oauth-apiserver" containerID="cri-o://7baf84a669ae145308ec696ea2a3c0f0d8a3eaa1489aa598dc200ebb070fc533" gracePeriod=120
Mar 12 14:12:58.164220 master-0 kubenswrapper[7440]: I0312 14:12:58.164030 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8q2fv"
Mar 12 14:12:58.581366 master-0 kubenswrapper[7440]: I0312 14:12:58.580078 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:12:58.877042 master-0 kubenswrapper[7440]: I0312 14:12:58.873791 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6b7d9dd778-7klpj"]
Mar 12 14:12:58.944647 master-0 kubenswrapper[7440]: I0312 14:12:58.940174 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 14:12:59.441604 master-0 kubenswrapper[7440]: I0312 14:12:59.441535 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0"
event={"ID":"d3364860-0708-4eef-ac94-94992bf2d631","Type":"ContainerStarted","Data":"8da3b2fa0fcd528d3c970486ebbff3077b0323e9beb917763b1f850b9e4f435f"} Mar 12 14:12:59.450972 master-0 kubenswrapper[7440]: I0312 14:12:59.450931 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerStarted","Data":"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef"} Mar 12 14:12:59.451054 master-0 kubenswrapper[7440]: I0312 14:12:59.450976 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerStarted","Data":"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184"} Mar 12 14:12:59.456011 master-0 kubenswrapper[7440]: I0312 14:12:59.455945 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" event={"ID":"42dbcb8f-e8c4-413e-977d-40aa6df226aa","Type":"ContainerStarted","Data":"96773e17e9462f90b171d3286268d0d8f5fc4990dec24aadd0ba11958115f19d"} Mar 12 14:12:59.460340 master-0 kubenswrapper[7440]: I0312 14:12:59.460243 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"b8084c79072268deed68a248c9cc23b07e893e8cdbd559a3d91ee67109a24a9f"} Mar 12 14:12:59.460340 master-0 kubenswrapper[7440]: I0312 14:12:59.460315 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"aa27a3d716446258953a4956aee28f02e22ffb14db399fb7312647fbcc4f9bfc"} Mar 12 14:12:59.462431 master-0 kubenswrapper[7440]: I0312 14:12:59.462406 7440 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb"} Mar 12 14:12:59.463090 master-0 kubenswrapper[7440]: I0312 14:12:59.463036 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:59.465932 master-0 kubenswrapper[7440]: I0312 14:12:59.464813 7440 generic.go:334] "Generic (PLEG): container finished" podID="39bda5b8-c748-4023-8680-8e8454512e5b" containerID="433f8c8699626602589391cd2daaab97922be2a22d3d7962e8991c85c86df5c6" exitCode=0 Mar 12 14:12:59.465932 master-0 kubenswrapper[7440]: I0312 14:12:59.464860 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerDied","Data":"433f8c8699626602589391cd2daaab97922be2a22d3d7962e8991c85c86df5c6"} Mar 12 14:12:59.465932 master-0 kubenswrapper[7440]: I0312 14:12:59.464885 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"5679426d37d3354caeeb4580675058670c5c7ef6fa2efa546a861e1c9f923e06"} Mar 12 14:12:59.466606 master-0 kubenswrapper[7440]: I0312 14:12:59.466356 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:12:59.855067 master-0 kubenswrapper[7440]: I0312 14:12:59.852586 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") pod \"controller-manager-74d66c4c7c-5lsl6\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " 
pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:12:59.855067 master-0 kubenswrapper[7440]: E0312 14:12:59.852716 7440 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:12:59.855067 master-0 kubenswrapper[7440]: E0312 14:12:59.852811 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca podName:70d139ff-05ec-4733-8c0c-b7de1a535d60 nodeName:}" failed. No retries permitted until 2026-03-12 14:13:07.852782594 +0000 UTC m=+48.188161213 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca") pod "controller-manager-74d66c4c7c-5lsl6" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60") : configmap "client-ca" not found Mar 12 14:13:00.259942 master-0 kubenswrapper[7440]: I0312 14:13:00.259891 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:13:00.494028 master-0 kubenswrapper[7440]: I0312 14:13:00.493978 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"8964cd1f217fb6cd94d6566a0a2a6f59f63cb7a634af81c532937c3dbd22f0d9"} Mar 12 14:13:00.494201 master-0 kubenswrapper[7440]: I0312 14:13:00.494036 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"bf877df3c9cca5ce74acde914aebb5ead90404a0291628cd7df82a19c157c176"} Mar 12 14:13:00.496795 master-0 kubenswrapper[7440]: I0312 14:13:00.496626 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" 
event={"ID":"d3364860-0708-4eef-ac94-94992bf2d631","Type":"ContainerStarted","Data":"f82502f50ac79890c44461c13992c782465cf9d5879da841305e795b8aa38182"} Mar 12 14:13:00.513353 master-0 kubenswrapper[7440]: I0312 14:13:00.512561 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" podStartSLOduration=13.512544173 podStartE2EDuration="13.512544173s" podCreationTimestamp="2026-03-12 14:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:00.509864417 +0000 UTC m=+40.845242986" watchObservedRunningTime="2026-03-12 14:13:00.512544173 +0000 UTC m=+40.847922732" Mar 12 14:13:00.937978 master-0 kubenswrapper[7440]: I0312 14:13:00.937023 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:13:01.348249 master-0 kubenswrapper[7440]: I0312 14:13:01.348201 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:13:01.348249 master-0 kubenswrapper[7440]: I0312 14:13:01.348257 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: I0312 14:13:01.359027 7440 patch_prober.go:28] interesting pod/apiserver-6b7d9dd778-7klpj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]log ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]etcd ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/generic-apiserver-start-informers ok Mar 
12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/max-in-flight-filter ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/project.openshift.io-projectcache ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/openshift.io-startinformers ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 12 14:13:01.359094 master-0 kubenswrapper[7440]: livez check failed Mar 12 14:13:01.359673 master-0 kubenswrapper[7440]: I0312 14:13:01.359113 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" podUID="39bda5b8-c748-4023-8680-8e8454512e5b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:13:03.186273 master-0 kubenswrapper[7440]: I0312 14:13:03.185609 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=8.185587621 podStartE2EDuration="8.185587621s" podCreationTimestamp="2026-03-12 14:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
14:13:00.526647923 +0000 UTC m=+40.862026482" watchObservedRunningTime="2026-03-12 14:13:03.185587621 +0000 UTC m=+43.520966180" Mar 12 14:13:03.187192 master-0 kubenswrapper[7440]: I0312 14:13:03.187161 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"] Mar 12 14:13:03.187398 master-0 kubenswrapper[7440]: I0312 14:13:03.187365 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" podUID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" containerName="cluster-version-operator" containerID="cri-o://cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f" gracePeriod=130 Mar 12 14:13:04.496928 master-0 kubenswrapper[7440]: I0312 14:13:04.496860 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fpjck" Mar 12 14:13:04.731648 master-0 kubenswrapper[7440]: I0312 14:13:04.731413 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:13:04.826653 master-0 kubenswrapper[7440]: I0312 14:13:04.826608 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") pod \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " Mar 12 14:13:04.826736 master-0 kubenswrapper[7440]: I0312 14:13:04.826678 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") pod \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " Mar 12 14:13:04.826736 master-0 kubenswrapper[7440]: I0312 14:13:04.826708 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") pod \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " Mar 12 14:13:04.826860 master-0 kubenswrapper[7440]: I0312 14:13:04.826739 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") pod \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " Mar 12 14:13:04.826860 master-0 kubenswrapper[7440]: I0312 14:13:04.826797 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") pod \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\" (UID: \"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6\") " Mar 12 14:13:04.827046 master-0 kubenswrapper[7440]: I0312 
14:13:04.827015 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:13:04.827091 master-0 kubenswrapper[7440]: I0312 14:13:04.827046 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:13:04.829133 master-0 kubenswrapper[7440]: I0312 14:13:04.828614 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca" (OuterVolumeSpecName: "service-ca") pod "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:13:04.829823 master-0 kubenswrapper[7440]: I0312 14:13:04.829790 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:13:04.829965 master-0 kubenswrapper[7440]: I0312 14:13:04.829934 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" (UID: "29ab0e68-ebc6-48a3-b234-e1794c4c5ad6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:13:04.933067 master-0 kubenswrapper[7440]: I0312 14:13:04.933017 7440 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:04.933067 master-0 kubenswrapper[7440]: I0312 14:13:04.933057 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:04.933067 master-0 kubenswrapper[7440]: I0312 14:13:04.933066 7440 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:04.933246 master-0 kubenswrapper[7440]: I0312 14:13:04.933075 7440 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:04.933246 master-0 kubenswrapper[7440]: I0312 14:13:04.933084 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:04.941394 master-0 kubenswrapper[7440]: I0312 
14:13:04.941285 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:04.943342 master-0 kubenswrapper[7440]: E0312 14:13:04.943313 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" containerName="cluster-version-operator" Mar 12 14:13:04.943342 master-0 kubenswrapper[7440]: I0312 14:13:04.943336 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" containerName="cluster-version-operator" Mar 12 14:13:04.943463 master-0 kubenswrapper[7440]: I0312 14:13:04.943442 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" containerName="cluster-version-operator" Mar 12 14:13:04.943785 master-0 kubenswrapper[7440]: I0312 14:13:04.943764 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:04.946890 master-0 kubenswrapper[7440]: I0312 14:13:04.945216 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 14:13:04.949892 master-0 kubenswrapper[7440]: I0312 14:13:04.948303 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:05.034273 master-0 kubenswrapper[7440]: I0312 14:13:05.034163 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.034273 master-0 kubenswrapper[7440]: I0312 14:13:05.034240 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.034273 master-0 kubenswrapper[7440]: I0312 14:13:05.034268 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.135145 master-0 kubenswrapper[7440]: I0312 14:13:05.135102 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.135477 master-0 kubenswrapper[7440]: I0312 14:13:05.135244 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.135609 master-0 kubenswrapper[7440]: I0312 14:13:05.135555 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.135711 master-0 kubenswrapper[7440]: I0312 14:13:05.135656 7440 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.135844 master-0 kubenswrapper[7440]: I0312 14:13:05.135825 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.153735 master-0 kubenswrapper[7440]: I0312 14:13:05.153677 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.270826 master-0 kubenswrapper[7440]: I0312 14:13:05.270789 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 12 14:13:05.338675 master-0 kubenswrapper[7440]: I0312 14:13:05.338174 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") pod \"route-controller-manager-658bd69d7b-dgrvq\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:13:05.338675 master-0 kubenswrapper[7440]: E0312 14:13:05.338286 7440 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 14:13:05.338675 master-0 kubenswrapper[7440]: E0312 14:13:05.338338 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca podName:2bbc2b06-9e09-46e1-803f-d60d9a41e49d nodeName:}" failed. No retries permitted until 2026-03-12 14:13:21.338324456 +0000 UTC m=+61.673703005 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca") pod "route-controller-manager-658bd69d7b-dgrvq" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d") : configmap "client-ca" not found Mar 12 14:13:05.491441 master-0 kubenswrapper[7440]: I0312 14:13:05.491383 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"] Mar 12 14:13:05.491739 master-0 kubenswrapper[7440]: E0312 14:13:05.491704 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" podUID="70d139ff-05ec-4733-8c0c-b7de1a535d60" Mar 12 14:13:05.537008 master-0 kubenswrapper[7440]: I0312 14:13:05.536952 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" event={"ID":"07a6a1d6-fecf-4847-b7c1-160d5d7320fb","Type":"ContainerStarted","Data":"c17b1a095c8d2091cd370bbb911b06ac4230f51bbc05825adea160d39c746b2d"} Mar 12 14:13:05.538303 master-0 kubenswrapper[7440]: I0312 14:13:05.538044 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:13:05.541740 master-0 kubenswrapper[7440]: I0312 14:13:05.541625 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" event={"ID":"272b53c4-134c-404d-9a27-c7371415b1f7","Type":"ContainerStarted","Data":"3010a80a92c3a02adf1119b509dd4d02bfec5d34b2c3fbe2b1e05487ab8ddb25"} Mar 12 14:13:05.545048 master-0 kubenswrapper[7440]: I0312 14:13:05.542385 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:13:05.545048 
master-0 kubenswrapper[7440]: I0312 14:13:05.544076 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:13:05.545048 master-0 kubenswrapper[7440]: I0312 14:13:05.544307 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab"} Mar 12 14:13:05.545048 master-0 kubenswrapper[7440]: I0312 14:13:05.544709 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:13:05.545638 master-0 kubenswrapper[7440]: I0312 14:13:05.545613 7440 generic.go:334] "Generic (PLEG): container finished" podID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" containerID="cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f" exitCode=0 Mar 12 14:13:05.545697 master-0 kubenswrapper[7440]: I0312 14:13:05.545661 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" Mar 12 14:13:05.545697 master-0 kubenswrapper[7440]: I0312 14:13:05.545687 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:13:05.545813 master-0 kubenswrapper[7440]: I0312 14:13:05.545674 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" event={"ID":"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6","Type":"ContainerDied","Data":"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f"} Mar 12 14:13:05.545880 master-0 kubenswrapper[7440]: I0312 14:13:05.545829 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-vs878" event={"ID":"29ab0e68-ebc6-48a3-b234-e1794c4c5ad6","Type":"ContainerDied","Data":"c4b5088c802a368b7c7d0efdb50871f27fcbf22b2f22473b852cef3d38ae1618"} Mar 12 14:13:05.545880 master-0 kubenswrapper[7440]: I0312 14:13:05.545856 7440 scope.go:117] "RemoveContainer" containerID="cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f" Mar 12 14:13:05.548791 master-0 kubenswrapper[7440]: I0312 14:13:05.548401 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:13:05.558401 master-0 kubenswrapper[7440]: I0312 14:13:05.558329 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:13:05.560328 master-0 kubenswrapper[7440]: I0312 14:13:05.559976 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"] Mar 12 14:13:05.560328 master-0 kubenswrapper[7440]: E0312 14:13:05.560242 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" podUID="2bbc2b06-9e09-46e1-803f-d60d9a41e49d" Mar 12 14:13:05.571729 master-0 kubenswrapper[7440]: I0312 14:13:05.570803 7440 scope.go:117] "RemoveContainer" containerID="cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f" Mar 12 14:13:05.571729 master-0 kubenswrapper[7440]: E0312 14:13:05.571391 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f\": container with ID starting with cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f not found: ID does not exist" containerID="cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f" Mar 12 14:13:05.573006 master-0 kubenswrapper[7440]: I0312 14:13:05.572402 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f"} err="failed to get container status \"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f\": rpc error: code = NotFound desc = could not find container \"cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f\": container with ID starting with cf23fc0b6cd95a02f686246211e31b8df0ad1c1b49b21a0c7774df5c0e49337f not found: ID does not exist" Mar 12 14:13:05.644212 master-0 
kubenswrapper[7440]: I0312 14:13:05.642694 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles\") pod \"70d139ff-05ec-4733-8c0c-b7de1a535d60\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " Mar 12 14:13:05.644212 master-0 kubenswrapper[7440]: I0312 14:13:05.642784 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config\") pod \"70d139ff-05ec-4733-8c0c-b7de1a535d60\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " Mar 12 14:13:05.644212 master-0 kubenswrapper[7440]: I0312 14:13:05.642878 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrrv5\" (UniqueName: \"kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5\") pod \"70d139ff-05ec-4733-8c0c-b7de1a535d60\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " Mar 12 14:13:05.644212 master-0 kubenswrapper[7440]: I0312 14:13:05.642923 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert\") pod \"70d139ff-05ec-4733-8c0c-b7de1a535d60\" (UID: \"70d139ff-05ec-4733-8c0c-b7de1a535d60\") " Mar 12 14:13:05.644212 master-0 kubenswrapper[7440]: I0312 14:13:05.643256 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70d139ff-05ec-4733-8c0c-b7de1a535d60" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:13:05.644212 master-0 kubenswrapper[7440]: I0312 14:13:05.643652 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config" (OuterVolumeSpecName: "config") pod "70d139ff-05ec-4733-8c0c-b7de1a535d60" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:13:05.646307 master-0 kubenswrapper[7440]: I0312 14:13:05.646269 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5" (OuterVolumeSpecName: "kube-api-access-mrrv5") pod "70d139ff-05ec-4733-8c0c-b7de1a535d60" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60"). InnerVolumeSpecName "kube-api-access-mrrv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:13:05.647693 master-0 kubenswrapper[7440]: I0312 14:13:05.647543 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70d139ff-05ec-4733-8c0c-b7de1a535d60" (UID: "70d139ff-05ec-4733-8c0c-b7de1a535d60"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:13:05.708558 master-0 kubenswrapper[7440]: I0312 14:13:05.706181 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:05.713267 master-0 kubenswrapper[7440]: I0312 14:13:05.712286 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"] Mar 12 14:13:05.720170 master-0 kubenswrapper[7440]: I0312 14:13:05.720131 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-vs878"] Mar 12 14:13:05.731954 master-0 kubenswrapper[7440]: W0312 14:13:05.729827 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1ea22ec3_d02f_4f30_accf_eba03f4d4214.slice/crio-8c1d080975e9cbaa7de4b702849cfa0b771fcdb691b2e504d05cd9dac5b3bd8f WatchSource:0}: Error finding container 8c1d080975e9cbaa7de4b702849cfa0b771fcdb691b2e504d05cd9dac5b3bd8f: Status 404 returned error can't find the container with id 8c1d080975e9cbaa7de4b702849cfa0b771fcdb691b2e504d05cd9dac5b3bd8f Mar 12 14:13:05.744665 master-0 kubenswrapper[7440]: I0312 14:13:05.744617 7440 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:05.744665 master-0 kubenswrapper[7440]: I0312 14:13:05.744659 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:05.744665 master-0 kubenswrapper[7440]: I0312 14:13:05.744669 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrrv5\" (UniqueName: \"kubernetes.io/projected/70d139ff-05ec-4733-8c0c-b7de1a535d60-kube-api-access-mrrv5\") on node 
\"master-0\" DevicePath \"\"" Mar 12 14:13:05.744811 master-0 kubenswrapper[7440]: I0312 14:13:05.744678 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d139ff-05ec-4733-8c0c-b7de1a535d60-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:05.817409 master-0 kubenswrapper[7440]: I0312 14:13:05.817346 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ab0e68-ebc6-48a3-b234-e1794c4c5ad6" path="/var/lib/kubelet/pods/29ab0e68-ebc6-48a3-b234-e1794c4c5ad6/volumes" Mar 12 14:13:05.839715 master-0 kubenswrapper[7440]: I0312 14:13:05.838340 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"] Mar 12 14:13:05.839715 master-0 kubenswrapper[7440]: I0312 14:13:05.839546 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:05.843130 master-0 kubenswrapper[7440]: I0312 14:13:05.843064 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:13:05.843422 master-0 kubenswrapper[7440]: I0312 14:13:05.843398 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:13:05.843598 master-0 kubenswrapper[7440]: I0312 14:13:05.843575 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:13:05.946443 master-0 kubenswrapper[7440]: I0312 14:13:05.946269 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:05.946556 master-0 kubenswrapper[7440]: I0312 14:13:05.946462 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:05.946556 master-0 kubenswrapper[7440]: I0312 14:13:05.946496 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:05.946556 master-0 kubenswrapper[7440]: I0312 14:13:05.946541 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:05.946699 master-0 kubenswrapper[7440]: I0312 14:13:05.946569 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047686 
7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047734 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047768 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047789 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047846 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.048225 master-0 kubenswrapper[7440]: I0312 14:13:06.047881 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.049400 master-0 kubenswrapper[7440]: I0312 14:13:06.048419 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.049400 master-0 kubenswrapper[7440]: I0312 14:13:06.049332 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.051230 master-0 kubenswrapper[7440]: I0312 14:13:06.051093 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.056891 master-0 kubenswrapper[7440]: I0312 14:13:06.056811 7440 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-4622r"] Mar 12 14:13:06.058711 master-0 kubenswrapper[7440]: I0312 14:13:06.058662 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.075198 master-0 kubenswrapper[7440]: I0312 14:13:06.074256 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4622r"] Mar 12 14:13:06.079490 master-0 kubenswrapper[7440]: I0312 14:13:06.079444 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 14:13:06.080227 master-0 kubenswrapper[7440]: I0312 14:13:06.080190 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="d3364860-0708-4eef-ac94-94992bf2d631" containerName="installer" containerID="cri-o://f82502f50ac79890c44461c13992c782465cf9d5879da841305e795b8aa38182" gracePeriod=30 Mar 12 14:13:06.100933 master-0 kubenswrapper[7440]: I0312 14:13:06.098202 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.152660 master-0 kubenswrapper[7440]: I0312 14:13:06.152601 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q6kr\" (UniqueName: \"kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.152925 master-0 kubenswrapper[7440]: I0312 14:13:06.152880 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.153192 master-0 kubenswrapper[7440]: I0312 14:13:06.153142 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.182076 master-0 kubenswrapper[7440]: I0312 14:13:06.181933 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:13:06.198321 master-0 kubenswrapper[7440]: W0312 14:13:06.198270 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35674af_162c_4a4a_8605_158b2326267e.slice/crio-241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f WatchSource:0}: Error finding container 241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f: Status 404 returned error can't find the container with id 241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f Mar 12 14:13:06.246576 master-0 kubenswrapper[7440]: I0312 14:13:06.246523 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:13:06.249052 master-0 kubenswrapper[7440]: I0312 14:13:06.249018 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.256184 master-0 kubenswrapper[7440]: I0312 14:13:06.256046 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:13:06.257106 master-0 kubenswrapper[7440]: I0312 14:13:06.257003 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.257619 master-0 kubenswrapper[7440]: I0312 14:13:06.257381 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.257619 master-0 kubenswrapper[7440]: I0312 14:13:06.257114 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.257707 master-0 kubenswrapper[7440]: I0312 14:13:06.257693 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.259105 master-0 kubenswrapper[7440]: I0312 14:13:06.259053 7440 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9q6kr\" (UniqueName: \"kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.276573 master-0 kubenswrapper[7440]: I0312 14:13:06.276528 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q6kr\" (UniqueName: \"kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr\") pod \"certified-operators-4622r\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") " pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.354543 master-0 kubenswrapper[7440]: I0312 14:13:06.354493 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:13:06.358779 master-0 kubenswrapper[7440]: I0312 14:13:06.358696 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:13:06.360239 master-0 kubenswrapper[7440]: I0312 14:13:06.360212 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh4cz\" (UniqueName: \"kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.360239 master-0 kubenswrapper[7440]: I0312 14:13:06.360287 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.360481 master-0 
kubenswrapper[7440]: I0312 14:13:06.360357 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.380926 master-0 kubenswrapper[7440]: I0312 14:13:06.380835 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:13:06.461430 master-0 kubenswrapper[7440]: I0312 14:13:06.461300 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.461601 master-0 kubenswrapper[7440]: I0312 14:13:06.461435 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.461601 master-0 kubenswrapper[7440]: I0312 14:13:06.461467 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh4cz\" (UniqueName: \"kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.462657 master-0 kubenswrapper[7440]: I0312 14:13:06.462625 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.463652 master-0 kubenswrapper[7440]: I0312 14:13:06.463623 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.502137 master-0 kubenswrapper[7440]: I0312 14:13:06.501690 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh4cz\" (UniqueName: \"kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz\") pod \"community-operators-thh89\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.574411 master-0 kubenswrapper[7440]: I0312 14:13:06.574364 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerStarted","Data":"74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce"} Mar 12 14:13:06.574411 master-0 kubenswrapper[7440]: I0312 14:13:06.574408 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerStarted","Data":"241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f"} Mar 12 14:13:06.582142 master-0 kubenswrapper[7440]: I0312 14:13:06.580682 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6" Mar 12 14:13:06.582142 master-0 kubenswrapper[7440]: I0312 14:13:06.580740 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:13:06.582142 master-0 kubenswrapper[7440]: I0312 14:13:06.580829 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1ea22ec3-d02f-4f30-accf-eba03f4d4214","Type":"ContainerStarted","Data":"0549077fdcaf4a2aa2d8ef81531f23141be4182774336ab1344ca8cff8e70c94"} Mar 12 14:13:06.582142 master-0 kubenswrapper[7440]: I0312 14:13:06.580846 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1ea22ec3-d02f-4f30-accf-eba03f4d4214","Type":"ContainerStarted","Data":"8c1d080975e9cbaa7de4b702849cfa0b771fcdb691b2e504d05cd9dac5b3bd8f"} Mar 12 14:13:06.585618 master-0 kubenswrapper[7440]: I0312 14:13:06.584689 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thh89" Mar 12 14:13:06.605418 master-0 kubenswrapper[7440]: I0312 14:13:06.605353 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:13:06.615731 master-0 kubenswrapper[7440]: I0312 14:13:06.615660 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" podStartSLOduration=1.615641298 podStartE2EDuration="1.615641298s" podCreationTimestamp="2026-03-12 14:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:06.615243318 +0000 UTC m=+46.950621877" watchObservedRunningTime="2026-03-12 14:13:06.615641298 +0000 UTC m=+46.951019857" Mar 12 14:13:06.690027 master-0 kubenswrapper[7440]: I0312 14:13:06.689973 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.68995549 podStartE2EDuration="2.68995549s" podCreationTimestamp="2026-03-12 14:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:06.656138401 +0000 UTC m=+46.991516980" watchObservedRunningTime="2026-03-12 14:13:06.68995549 +0000 UTC m=+47.025334049" Mar 12 14:13:06.753049 master-0 kubenswrapper[7440]: I0312 14:13:06.752971 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"] Mar 12 14:13:06.757797 master-0 kubenswrapper[7440]: I0312 14:13:06.757282 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.758355 master-0 kubenswrapper[7440]: I0312 14:13:06.758313 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"] Mar 12 14:13:06.760490 master-0 kubenswrapper[7440]: I0312 14:13:06.760450 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74d66c4c7c-5lsl6"] Mar 12 14:13:06.763377 master-0 kubenswrapper[7440]: I0312 14:13:06.763339 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:13:06.763489 master-0 kubenswrapper[7440]: I0312 14:13:06.763426 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:13:06.763768 master-0 kubenswrapper[7440]: I0312 14:13:06.763658 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:13:06.764769 master-0 kubenswrapper[7440]: I0312 14:13:06.764649 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvrq\" (UniqueName: \"kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq\") pod \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " Mar 12 14:13:06.764769 master-0 kubenswrapper[7440]: I0312 14:13:06.764703 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert\") pod \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " Mar 12 14:13:06.764769 master-0 kubenswrapper[7440]: I0312 14:13:06.764742 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config\") pod \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\" (UID: \"2bbc2b06-9e09-46e1-803f-d60d9a41e49d\") " Mar 12 14:13:06.767063 master-0 kubenswrapper[7440]: I0312 14:13:06.766379 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"] Mar 12 14:13:06.768145 master-0 kubenswrapper[7440]: I0312 14:13:06.767118 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config" (OuterVolumeSpecName: "config") pod "2bbc2b06-9e09-46e1-803f-d60d9a41e49d" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:13:06.768145 master-0 kubenswrapper[7440]: I0312 14:13:06.767293 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:13:06.768145 master-0 kubenswrapper[7440]: I0312 14:13:06.767441 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:13:06.768145 master-0 kubenswrapper[7440]: I0312 14:13:06.767606 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:13:06.775819 master-0 kubenswrapper[7440]: I0312 14:13:06.773372 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2bbc2b06-9e09-46e1-803f-d60d9a41e49d" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:13:06.800752 master-0 kubenswrapper[7440]: I0312 14:13:06.800690 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq" (OuterVolumeSpecName: "kube-api-access-kqvrq") pod "2bbc2b06-9e09-46e1-803f-d60d9a41e49d" (UID: "2bbc2b06-9e09-46e1-803f-d60d9a41e49d"). InnerVolumeSpecName "kube-api-access-kqvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:13:06.866239 master-0 kubenswrapper[7440]: I0312 14:13:06.866179 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866246 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z778j\" (UniqueName: \"kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866283 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866344 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866376 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866484 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvrq\" (UniqueName: \"kubernetes.io/projected/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-kube-api-access-kqvrq\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866512 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866528 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:06.866957 master-0 kubenswrapper[7440]: I0312 14:13:06.866541 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d139ff-05ec-4733-8c0c-b7de1a535d60-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.967873 
7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.968186 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.968220 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z778j\" (UniqueName: \"kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.968251 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.968295 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: 
\"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.968992 master-0 kubenswrapper[7440]: I0312 14:13:06.968969 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.970124 master-0 kubenswrapper[7440]: I0312 14:13:06.970020 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.970359 master-0 kubenswrapper[7440]: I0312 14:13:06.970239 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.986537 master-0 kubenswrapper[7440]: I0312 14:13:06.986374 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:06.988584 master-0 kubenswrapper[7440]: I0312 14:13:06.988478 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z778j\" (UniqueName: 
\"kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j\") pod \"controller-manager-865cf8f5f4-frvwv\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") " pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:07.005454 master-0 kubenswrapper[7440]: I0312 14:13:07.005151 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4622r"] Mar 12 14:13:07.015824 master-0 kubenswrapper[7440]: W0312 14:13:07.014408 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782 WatchSource:0}: Error finding container 1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782: Status 404 returned error can't find the container with id 1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782 Mar 12 14:13:07.111572 master-0 kubenswrapper[7440]: I0312 14:13:07.111151 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:07.135743 master-0 kubenswrapper[7440]: I0312 14:13:07.135172 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:13:07.155097 master-0 kubenswrapper[7440]: W0312 14:13:07.155055 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda932351b_831e_4930_85a2_f2faf1e6b262.slice/crio-f8a10557be91edf1ddf87676f9207a0449cad63c109bcfc61a138873a1379236 WatchSource:0}: Error finding container f8a10557be91edf1ddf87676f9207a0449cad63c109bcfc61a138873a1379236: Status 404 returned error can't find the container with id f8a10557be91edf1ddf87676f9207a0449cad63c109bcfc61a138873a1379236 Mar 12 14:13:07.552554 master-0 kubenswrapper[7440]: I0312 14:13:07.552435 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"] Mar 12 14:13:07.561353 master-0 kubenswrapper[7440]: W0312 14:13:07.561307 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b264724_e891_4923_9304_cfdcb0c97f3d.slice/crio-b8c172ab9219714ca4112f5447fdb45a8746085d8175f2b6f513903d2e7c0722 WatchSource:0}: Error finding container b8c172ab9219714ca4112f5447fdb45a8746085d8175f2b6f513903d2e7c0722: Status 404 returned error can't find the container with id b8c172ab9219714ca4112f5447fdb45a8746085d8175f2b6f513903d2e7c0722 Mar 12 14:13:07.586298 master-0 kubenswrapper[7440]: I0312 14:13:07.586258 7440 generic.go:334] "Generic (PLEG): container finished" podID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerID="d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a" exitCode=0 Mar 12 14:13:07.586838 master-0 kubenswrapper[7440]: I0312 14:13:07.586304 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerDied","Data":"d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a"} Mar 12 14:13:07.586961 master-0 kubenswrapper[7440]: I0312 14:13:07.586859 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerStarted","Data":"1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782"} Mar 12 14:13:07.587895 master-0 kubenswrapper[7440]: I0312 14:13:07.587858 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" event={"ID":"4b264724-e891-4923-9304-cfdcb0c97f3d","Type":"ContainerStarted","Data":"b8c172ab9219714ca4112f5447fdb45a8746085d8175f2b6f513903d2e7c0722"} Mar 12 14:13:07.589094 master-0 kubenswrapper[7440]: I0312 14:13:07.589064 7440 generic.go:334] "Generic (PLEG): container finished" podID="a932351b-831e-4930-85a2-f2faf1e6b262" containerID="74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab" exitCode=0 Mar 12 14:13:07.589149 master-0 kubenswrapper[7440]: I0312 14:13:07.589136 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq" Mar 12 14:13:07.589675 master-0 kubenswrapper[7440]: I0312 14:13:07.589646 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerDied","Data":"74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab"} Mar 12 14:13:07.589675 master-0 kubenswrapper[7440]: I0312 14:13:07.589676 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerStarted","Data":"f8a10557be91edf1ddf87676f9207a0449cad63c109bcfc61a138873a1379236"} Mar 12 14:13:07.644925 master-0 kubenswrapper[7440]: I0312 14:13:07.644643 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"] Mar 12 14:13:07.654943 master-0 kubenswrapper[7440]: I0312 14:13:07.652348 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bd69d7b-dgrvq"] Mar 12 14:13:07.660931 master-0 kubenswrapper[7440]: I0312 14:13:07.656259 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:13:07.660931 master-0 kubenswrapper[7440]: I0312 14:13:07.657369 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.692975 master-0 kubenswrapper[7440]: I0312 14:13:07.688046 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:13:07.779126 master-0 kubenswrapper[7440]: I0312 14:13:07.779060 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.779354 master-0 kubenswrapper[7440]: I0312 14:13:07.779267 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rnk\" (UniqueName: \"kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.779400 master-0 kubenswrapper[7440]: I0312 14:13:07.779364 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.779520 master-0 kubenswrapper[7440]: I0312 14:13:07.779494 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bbc2b06-9e09-46e1-803f-d60d9a41e49d-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:07.812691 master-0 kubenswrapper[7440]: I0312 14:13:07.810589 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2bbc2b06-9e09-46e1-803f-d60d9a41e49d" path="/var/lib/kubelet/pods/2bbc2b06-9e09-46e1-803f-d60d9a41e49d/volumes" Mar 12 14:13:07.812691 master-0 kubenswrapper[7440]: I0312 14:13:07.810964 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d139ff-05ec-4733-8c0c-b7de1a535d60" path="/var/lib/kubelet/pods/70d139ff-05ec-4733-8c0c-b7de1a535d60/volumes" Mar 12 14:13:07.880965 master-0 kubenswrapper[7440]: I0312 14:13:07.880890 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.880965 master-0 kubenswrapper[7440]: I0312 14:13:07.880971 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8rnk\" (UniqueName: \"kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.881196 master-0 kubenswrapper[7440]: I0312 14:13:07.880996 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.881400 master-0 kubenswrapper[7440]: I0312 14:13:07.881373 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " 
pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.881542 master-0 kubenswrapper[7440]: I0312 14:13:07.881502 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.898394 master-0 kubenswrapper[7440]: I0312 14:13:07.898353 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8rnk\" (UniqueName: \"kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk\") pod \"redhat-marketplace-9qngm\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:07.974790 master-0 kubenswrapper[7440]: I0312 14:13:07.974710 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:13:08.449107 master-0 kubenswrapper[7440]: I0312 14:13:08.448792 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:13:08.459770 master-0 kubenswrapper[7440]: W0312 14:13:08.459713 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd181b683_a575_45a3_b736_ad4e07486545.slice/crio-ad049afe5ae9a1ecf28a08d3c5dea4946348cb8f5a1a87ed70b45bf058b12cac WatchSource:0}: Error finding container ad049afe5ae9a1ecf28a08d3c5dea4946348cb8f5a1a87ed70b45bf058b12cac: Status 404 returned error can't find the container with id ad049afe5ae9a1ecf28a08d3c5dea4946348cb8f5a1a87ed70b45bf058b12cac Mar 12 14:13:08.468654 master-0 kubenswrapper[7440]: I0312 14:13:08.468580 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 14:13:08.469248 
master-0 kubenswrapper[7440]: I0312 14:13:08.469211 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.485804 master-0 kubenswrapper[7440]: I0312 14:13:08.485720 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 14:13:08.591191 master-0 kubenswrapper[7440]: I0312 14:13:08.591108 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.591191 master-0 kubenswrapper[7440]: I0312 14:13:08.591185 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.591235 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.593884 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerStarted","Data":"ad049afe5ae9a1ecf28a08d3c5dea4946348cb8f5a1a87ed70b45bf058b12cac"} Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 
14:13:08.692779 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.692824 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.692855 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.692994 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.693089 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.716530 7440 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.847826 master-0 kubenswrapper[7440]: I0312 14:13:08.798320 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:08.856848 master-0 kubenswrapper[7440]: I0312 14:13:08.856790 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"] Mar 12 14:13:08.858306 master-0 kubenswrapper[7440]: I0312 14:13:08.858270 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:08.875331 master-0 kubenswrapper[7440]: I0312 14:13:08.875284 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"] Mar 12 14:13:08.996837 master-0 kubenswrapper[7440]: I0312 14:13:08.996800 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:08.996960 master-0 kubenswrapper[7440]: I0312 14:13:08.996874 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztgb\" (UniqueName: \"kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:08.996960 master-0 kubenswrapper[7440]: I0312 14:13:08.996908 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.099178 master-0 kubenswrapper[7440]: I0312 14:13:09.099056 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"] Mar 12 14:13:09.100217 master-0 kubenswrapper[7440]: I0312 14:13:09.099337 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztgb\" (UniqueName: \"kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.100217 master-0 kubenswrapper[7440]: I0312 14:13:09.099375 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.100217 master-0 kubenswrapper[7440]: I0312 14:13:09.099418 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.100217 master-0 kubenswrapper[7440]: I0312 14:13:09.099692 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.100217 master-0 kubenswrapper[7440]: I0312 14:13:09.099942 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.100576 master-0 kubenswrapper[7440]: I0312 14:13:09.100469 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:09.103045 master-0 kubenswrapper[7440]: I0312 14:13:09.102938 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:13:09.105695 master-0 kubenswrapper[7440]: I0312 14:13:09.105102 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:13:09.105695 master-0 kubenswrapper[7440]: I0312 14:13:09.105157 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:13:09.105695 master-0 kubenswrapper[7440]: I0312 14:13:09.105312 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:13:09.105695 master-0 kubenswrapper[7440]: I0312 14:13:09.105559 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.200977 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.201030 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.201058 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.201079 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtbf\" (UniqueName: \"kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.302032 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.302083 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.302119 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.302139 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtbf\" (UniqueName: \"kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.304303 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " 
pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.304814 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.362684 master-0 kubenswrapper[7440]: I0312 14:13:09.317640 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:09.598263 master-0 kubenswrapper[7440]: I0312 14:13:09.598207 7440 generic.go:334] "Generic (PLEG): container finished" podID="d181b683-a575-45a3-b736-ad4e07486545" containerID="75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272" exitCode=0 Mar 12 14:13:09.598263 master-0 kubenswrapper[7440]: I0312 14:13:09.598255 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerDied","Data":"75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272"} Mar 12 14:13:10.195462 master-0 kubenswrapper[7440]: I0312 14:13:10.195372 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"] Mar 12 14:13:10.425198 master-0 kubenswrapper[7440]: I0312 14:13:10.422451 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 14:13:10.603057 master-0 kubenswrapper[7440]: 
I0312 14:13:10.603011 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"ae49ad14-9025-4459-8c98-a629febe979e","Type":"ContainerStarted","Data":"ab8264b52a1aa4d0d85a9f7d0837f6863c3554849bfa921f04685ecd2e2d8086"} Mar 12 14:13:11.199200 master-0 kubenswrapper[7440]: I0312 14:13:11.198983 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 14:13:11.199647 master-0 kubenswrapper[7440]: I0312 14:13:11.199626 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.203995 master-0 kubenswrapper[7440]: I0312 14:13:11.203515 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 12 14:13:11.213839 master-0 kubenswrapper[7440]: I0312 14:13:11.213795 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dztgb\" (UniqueName: \"kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb\") pod \"redhat-operators-ns7pm\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:11.219845 master-0 kubenswrapper[7440]: I0312 14:13:11.219704 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtbf\" (UniqueName: \"kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf\") pod \"route-controller-manager-84bf88fbd-c4hcn\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") " pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:11.222460 master-0 kubenswrapper[7440]: I0312 14:13:11.222408 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:11.354072 master-0 kubenswrapper[7440]: I0312 14:13:11.317410 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:13:11.354072 master-0 kubenswrapper[7440]: I0312 14:13:11.333715 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.354072 master-0 kubenswrapper[7440]: I0312 14:13:11.333768 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.354072 master-0 kubenswrapper[7440]: I0312 14:13:11.333814 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.435187 master-0 kubenswrapper[7440]: I0312 14:13:11.435138 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.435396 master-0 kubenswrapper[7440]: I0312 14:13:11.435297 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.435396 master-0 kubenswrapper[7440]: I0312 14:13:11.435379 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.435484 master-0 kubenswrapper[7440]: I0312 14:13:11.435445 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.435628 master-0 kubenswrapper[7440]: I0312 14:13:11.435561 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:11.609156 master-0 kubenswrapper[7440]: I0312 14:13:11.609107 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"ae49ad14-9025-4459-8c98-a629febe979e","Type":"ContainerStarted","Data":"eb805704d20d763392b4ff446e51966f849888374acf410cabe5517b88e3fc25"} Mar 12 14:13:11.950953 master-0 kubenswrapper[7440]: I0312 14:13:11.948664 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 14:13:13.570850 master-0 kubenswrapper[7440]: 
I0312 14:13:13.570739 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") " pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:13.575236 master-0 kubenswrapper[7440]: I0312 14:13:13.575127 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"] Mar 12 14:13:13.587967 master-0 kubenswrapper[7440]: I0312 14:13:13.582292 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"] Mar 12 14:13:13.618803 master-0 kubenswrapper[7440]: I0312 14:13:13.618751 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" event={"ID":"2cb9e9e2-2673-4673-8999-622c97440572","Type":"ContainerStarted","Data":"fd5a8bd9c2ad98ab24303fb76a8cf1ab93ba119f03cf0a4291f0f8936efe0f3f"} Mar 12 14:13:13.620115 master-0 kubenswrapper[7440]: I0312 14:13:13.620076 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerStarted","Data":"841b9a60fc8e48fe4721499840092282bfd7c62abd981c2c9d32a9c0204e85cd"} Mar 12 14:13:13.666652 master-0 kubenswrapper[7440]: I0312 14:13:13.666615 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 14:13:14.626538 master-0 kubenswrapper[7440]: I0312 14:13:14.626482 7440 generic.go:334] "Generic (PLEG): container finished" podID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerID="170193929f6a99afbd76eacae4d3179712e768f76f3e9ac38d49e68d3e5f5c8d" exitCode=0 Mar 12 14:13:14.626538 master-0 kubenswrapper[7440]: I0312 14:13:14.626527 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerDied","Data":"170193929f6a99afbd76eacae4d3179712e768f76f3e9ac38d49e68d3e5f5c8d"} Mar 12 14:13:17.855852 master-0 kubenswrapper[7440]: I0312 14:13:17.855789 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 14:13:18.858580 master-0 kubenswrapper[7440]: I0312 14:13:18.858027 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:13:18.859485 master-0 kubenswrapper[7440]: I0312 14:13:18.858721 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:18.882441 master-0 kubenswrapper[7440]: I0312 14:13:18.882392 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 14:13:18.910832 master-0 kubenswrapper[7440]: I0312 14:13:18.910777 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 14:13:18.914215 master-0 kubenswrapper[7440]: I0312 14:13:18.913639 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="ae49ad14-9025-4459-8c98-a629febe979e" containerName="installer" containerID="cri-o://eb805704d20d763392b4ff446e51966f849888374acf410cabe5517b88e3fc25" gracePeriod=30 Mar 12 14:13:18.920947 master-0 kubenswrapper[7440]: I0312 14:13:18.920878 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:13:18.983374 master-0 kubenswrapper[7440]: I0312 14:13:18.983303 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:18.983753 master-0 kubenswrapper[7440]: I0312 14:13:18.983637 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" containerName="installer" containerID="cri-o://0549077fdcaf4a2aa2d8ef81531f23141be4182774336ab1344ca8cff8e70c94" gracePeriod=30 Mar 12 14:13:19.074988 master-0 kubenswrapper[7440]: I0312 14:13:19.073006 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.074988 master-0 kubenswrapper[7440]: I0312 14:13:19.073065 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.074988 master-0 kubenswrapper[7440]: I0312 14:13:19.073086 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.175601 master-0 kubenswrapper[7440]: I0312 14:13:19.175427 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.175601 master-0 kubenswrapper[7440]: I0312 14:13:19.175577 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.175818 master-0 kubenswrapper[7440]: I0312 14:13:19.175610 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir\") pod \"installer-1-master-0\" (UID: 
\"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.175818 master-0 kubenswrapper[7440]: I0312 14:13:19.175682 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.175905 master-0 kubenswrapper[7440]: I0312 14:13:19.175841 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.179729 master-0 kubenswrapper[7440]: I0312 14:13:19.179658 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=11.179608414 podStartE2EDuration="11.179608414s" podCreationTimestamp="2026-03-12 14:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:19.178555568 +0000 UTC m=+59.513934147" watchObservedRunningTime="2026-03-12 14:13:19.179608414 +0000 UTC m=+59.514986983" Mar 12 14:13:19.203533 master-0 kubenswrapper[7440]: I0312 14:13:19.203483 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access\") pod \"installer-1-master-0\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:19.205576 master-0 kubenswrapper[7440]: I0312 14:13:19.205455 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:13:20.635348 master-0 kubenswrapper[7440]: I0312 14:13:20.635049 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 14:13:20.638384 master-0 kubenswrapper[7440]: I0312 14:13:20.638336 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.643075 master-0 kubenswrapper[7440]: I0312 14:13:20.643019 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 14:13:20.797473 master-0 kubenswrapper[7440]: I0312 14:13:20.796912 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.797473 master-0 kubenswrapper[7440]: I0312 14:13:20.796956 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.797473 master-0 kubenswrapper[7440]: I0312 14:13:20.796981 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.898487 master-0 kubenswrapper[7440]: I0312 14:13:20.898362 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.898487 master-0 kubenswrapper[7440]: I0312 14:13:20.898424 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.898487 master-0 kubenswrapper[7440]: I0312 14:13:20.898452 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.898841 master-0 kubenswrapper[7440]: I0312 14:13:20.898573 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:20.898841 master-0 kubenswrapper[7440]: I0312 14:13:20.898635 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:21.165692 master-0 kubenswrapper[7440]: I0312 14:13:21.165582 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:21.263500 master-0 kubenswrapper[7440]: I0312 14:13:21.263444 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:13:21.590499 master-0 kubenswrapper[7440]: I0312 14:13:21.590411 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 14:13:21.591369 master-0 kubenswrapper[7440]: I0312 14:13:21.591334 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.600788 master-0 kubenswrapper[7440]: I0312 14:13:21.600283 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 14:13:21.884043 master-0 kubenswrapper[7440]: I0312 14:13:21.883806 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.884043 master-0 kubenswrapper[7440]: I0312 14:13:21.883883 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.884043 master-0 kubenswrapper[7440]: I0312 14:13:21.883920 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.927754 master-0 kubenswrapper[7440]: I0312 14:13:21.927700 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj"] Mar 12 14:13:21.928420 master-0 kubenswrapper[7440]: I0312 14:13:21.928399 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:21.932832 master-0 kubenswrapper[7440]: I0312 14:13:21.931318 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 14:13:21.932832 master-0 kubenswrapper[7440]: I0312 14:13:21.931465 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 14:13:21.932832 master-0 kubenswrapper[7440]: I0312 14:13:21.931609 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 14:13:21.939362 master-0 kubenswrapper[7440]: I0312 14:13:21.938212 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj"] Mar 12 14:13:21.985469 master-0 kubenswrapper[7440]: I0312 14:13:21.985399 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.985469 master-0 
kubenswrapper[7440]: I0312 14:13:21.985462 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.985696 master-0 kubenswrapper[7440]: I0312 14:13:21.985513 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:21.985696 master-0 kubenswrapper[7440]: I0312 14:13:21.985542 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.985696 master-0 kubenswrapper[7440]: I0312 14:13:21.985615 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.985696 master-0 kubenswrapper[7440]: I0312 14:13:21.985562 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock\") pod \"installer-2-master-0\" (UID: 
\"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:21.985696 master-0 kubenswrapper[7440]: I0312 14:13:21.985681 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrqg\" (UniqueName: \"kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.001745 master-0 kubenswrapper[7440]: I0312 14:13:22.001704 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access\") pod \"installer-2-master-0\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:22.087306 master-0 kubenswrapper[7440]: I0312 14:13:22.087259 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrqg\" (UniqueName: \"kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.087487 master-0 kubenswrapper[7440]: I0312 14:13:22.087329 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.090494 
master-0 kubenswrapper[7440]: I0312 14:13:22.090472 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.123629 master-0 kubenswrapper[7440]: I0312 14:13:22.123572 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrqg\" (UniqueName: \"kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.270802 master-0 kubenswrapper[7440]: I0312 14:13:22.270735 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:13:22.271043 master-0 kubenswrapper[7440]: I0312 14:13:22.270910 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:13:22.684610 master-0 kubenswrapper[7440]: I0312 14:13:22.684555 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerStarted","Data":"5d684cba0a95ae743814a8952b46742b894c87c51cb377826df98e54818be432"} Mar 12 14:13:22.686559 master-0 kubenswrapper[7440]: I0312 14:13:22.686521 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_ae49ad14-9025-4459-8c98-a629febe979e/installer/0.log" Mar 12 14:13:22.686703 master-0 kubenswrapper[7440]: I0312 14:13:22.686568 7440 generic.go:334] "Generic (PLEG): container finished" podID="ae49ad14-9025-4459-8c98-a629febe979e" containerID="eb805704d20d763392b4ff446e51966f849888374acf410cabe5517b88e3fc25" exitCode=1 Mar 12 14:13:22.686703 master-0 kubenswrapper[7440]: I0312 14:13:22.686597 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"ae49ad14-9025-4459-8c98-a629febe979e","Type":"ContainerDied","Data":"eb805704d20d763392b4ff446e51966f849888374acf410cabe5517b88e3fc25"} Mar 12 14:13:24.335453 master-0 kubenswrapper[7440]: I0312 14:13:24.335421 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_ae49ad14-9025-4459-8c98-a629febe979e/installer/0.log" Mar 12 14:13:24.335983 master-0 kubenswrapper[7440]: I0312 14:13:24.335485 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 14:13:24.438026 master-0 kubenswrapper[7440]: I0312 14:13:24.437973 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access\") pod \"ae49ad14-9025-4459-8c98-a629febe979e\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " Mar 12 14:13:24.438026 master-0 kubenswrapper[7440]: I0312 14:13:24.438051 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir\") pod \"ae49ad14-9025-4459-8c98-a629febe979e\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") " Mar 12 14:13:24.438540 master-0 kubenswrapper[7440]: I0312 14:13:24.438306 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae49ad14-9025-4459-8c98-a629febe979e" (UID: "ae49ad14-9025-4459-8c98-a629febe979e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:13:24.438540 master-0 kubenswrapper[7440]: I0312 14:13:24.438343 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock\") pod \"ae49ad14-9025-4459-8c98-a629febe979e\" (UID: \"ae49ad14-9025-4459-8c98-a629febe979e\") "
Mar 12 14:13:24.438540 master-0 kubenswrapper[7440]: I0312 14:13:24.438507 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:24.438739 master-0 kubenswrapper[7440]: I0312 14:13:24.438561 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock" (OuterVolumeSpecName: "var-lock") pod "ae49ad14-9025-4459-8c98-a629febe979e" (UID: "ae49ad14-9025-4459-8c98-a629febe979e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:13:24.442044 master-0 kubenswrapper[7440]: I0312 14:13:24.442012 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae49ad14-9025-4459-8c98-a629febe979e" (UID: "ae49ad14-9025-4459-8c98-a629febe979e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:13:24.539100 master-0 kubenswrapper[7440]: I0312 14:13:24.539033 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae49ad14-9025-4459-8c98-a629febe979e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:24.539100 master-0 kubenswrapper[7440]: I0312 14:13:24.539071 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae49ad14-9025-4459-8c98-a629febe979e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:24.698982 master-0 kubenswrapper[7440]: I0312 14:13:24.698352 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_ae49ad14-9025-4459-8c98-a629febe979e/installer/0.log"
Mar 12 14:13:24.698982 master-0 kubenswrapper[7440]: I0312 14:13:24.698426 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"ae49ad14-9025-4459-8c98-a629febe979e","Type":"ContainerDied","Data":"ab8264b52a1aa4d0d85a9f7d0837f6863c3554849bfa921f04685ecd2e2d8086"}
Mar 12 14:13:24.698982 master-0 kubenswrapper[7440]: I0312 14:13:24.698472 7440 scope.go:117] "RemoveContainer" containerID="eb805704d20d763392b4ff446e51966f849888374acf410cabe5517b88e3fc25"
Mar 12 14:13:24.698982 master-0 kubenswrapper[7440]: I0312 14:13:24.698599 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 12 14:13:24.734060 master-0 kubenswrapper[7440]: I0312 14:13:24.732712 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 12 14:13:24.734060 master-0 kubenswrapper[7440]: I0312 14:13:24.733138 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 12 14:13:24.962785 master-0 kubenswrapper[7440]: I0312 14:13:24.962592 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"]
Mar 12 14:13:24.963019 master-0 kubenswrapper[7440]: E0312 14:13:24.962861 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae49ad14-9025-4459-8c98-a629febe979e" containerName="installer"
Mar 12 14:13:24.963019 master-0 kubenswrapper[7440]: I0312 14:13:24.962888 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae49ad14-9025-4459-8c98-a629febe979e" containerName="installer"
Mar 12 14:13:24.963019 master-0 kubenswrapper[7440]: I0312 14:13:24.963021 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae49ad14-9025-4459-8c98-a629febe979e" containerName="installer"
Mar 12 14:13:24.963655 master-0 kubenswrapper[7440]: I0312 14:13:24.963562 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:24.968523 master-0 kubenswrapper[7440]: I0312 14:13:24.967752 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 14:13:24.968523 master-0 kubenswrapper[7440]: I0312 14:13:24.967814 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 14:13:24.968523 master-0 kubenswrapper[7440]: I0312 14:13:24.968346 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 14:13:24.969831 master-0 kubenswrapper[7440]: I0312 14:13:24.969809 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 14:13:24.970603 master-0 kubenswrapper[7440]: I0312 14:13:24.970546 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 14:13:25.145047 master-0 kubenswrapper[7440]: I0312 14:13:25.144998 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5pc\" (UniqueName: \"kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.145047 master-0 kubenswrapper[7440]: I0312 14:13:25.145056 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.145285 master-0 kubenswrapper[7440]: I0312 14:13:25.145100 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.145285 master-0 kubenswrapper[7440]: I0312 14:13:25.145119 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.248669 master-0 kubenswrapper[7440]: I0312 14:13:25.248608 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.248862 master-0 kubenswrapper[7440]: I0312 14:13:25.248696 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5pc\" (UniqueName: \"kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.248862 master-0 kubenswrapper[7440]: I0312 14:13:25.248732 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.248862 master-0 kubenswrapper[7440]: I0312 14:13:25.248784 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.250363 master-0 kubenswrapper[7440]: I0312 14:13:25.249729 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.250363 master-0 kubenswrapper[7440]: I0312 14:13:25.249844 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.257022 master-0 kubenswrapper[7440]: I0312 14:13:25.255524 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.266235 master-0 kubenswrapper[7440]: I0312 14:13:25.266193 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5pc\" (UniqueName: \"kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc\") pod \"machine-approver-955fcfb87-6l2lc\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.292366 master-0 kubenswrapper[7440]: I0312 14:13:25.292309 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:13:25.813282 master-0 kubenswrapper[7440]: I0312 14:13:25.813194 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae49ad14-9025-4459-8c98-a629febe979e" path="/var/lib/kubelet/pods/ae49ad14-9025-4459-8c98-a629febe979e/volumes"
Mar 12 14:13:27.098932 master-0 kubenswrapper[7440]: I0312 14:13:27.098864 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"]
Mar 12 14:13:27.151025 master-0 kubenswrapper[7440]: I0312 14:13:27.148338 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"]
Mar 12 14:13:28.631499 master-0 kubenswrapper[7440]: I0312 14:13:28.631450 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"]
Mar 12 14:13:28.633803 master-0 kubenswrapper[7440]: I0312 14:13:28.632541 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.636952 master-0 kubenswrapper[7440]: I0312 14:13:28.636451 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 12 14:13:28.637234 master-0 kubenswrapper[7440]: I0312 14:13:28.637046 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 12 14:13:28.637234 master-0 kubenswrapper[7440]: I0312 14:13:28.637148 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 12 14:13:28.642290 master-0 kubenswrapper[7440]: I0312 14:13:28.641180 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 12 14:13:28.642290 master-0 kubenswrapper[7440]: I0312 14:13:28.642075 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"]
Mar 12 14:13:28.715928 master-0 kubenswrapper[7440]: I0312 14:13:28.713040 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.715928 master-0 kubenswrapper[7440]: I0312 14:13:28.713112 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtp2z\" (UniqueName: \"kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.715928 master-0 kubenswrapper[7440]: I0312 14:13:28.713150 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.814670 master-0 kubenswrapper[7440]: I0312 14:13:28.814611 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.815101 master-0 kubenswrapper[7440]: I0312 14:13:28.814706 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtp2z\" (UniqueName: \"kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.815101 master-0 kubenswrapper[7440]: I0312 14:13:28.814731 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.818959 master-0 kubenswrapper[7440]: I0312 14:13:28.818922 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:28.820359 master-0 kubenswrapper[7440]: I0312 14:13:28.820322 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:29.058725 master-0 kubenswrapper[7440]: I0312 14:13:29.058656 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtp2z\" (UniqueName: \"kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:29.078813 master-0 kubenswrapper[7440]: I0312 14:13:29.078751 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"]
Mar 12 14:13:29.079850 master-0 kubenswrapper[7440]: I0312 14:13:29.079804 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.086557 master-0 kubenswrapper[7440]: I0312 14:13:29.085451 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 14:13:29.086557 master-0 kubenswrapper[7440]: I0312 14:13:29.085619 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 12 14:13:29.086557 master-0 kubenswrapper[7440]: I0312 14:13:29.086207 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 14:13:29.096521 master-0 kubenswrapper[7440]: I0312 14:13:29.096460 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"]
Mar 12 14:13:29.221086 master-0 kubenswrapper[7440]: I0312 14:13:29.221015 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ptf\" (UniqueName: \"kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.221086 master-0 kubenswrapper[7440]: I0312 14:13:29.221092 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.256394 master-0 kubenswrapper[7440]: I0312 14:13:29.255914 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:13:29.322467 master-0 kubenswrapper[7440]: I0312 14:13:29.322339 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.322467 master-0 kubenswrapper[7440]: I0312 14:13:29.322461 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62ptf\" (UniqueName: \"kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.326033 master-0 kubenswrapper[7440]: I0312 14:13:29.325968 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.338303 master-0 kubenswrapper[7440]: I0312 14:13:29.338246 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62ptf\" (UniqueName: \"kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.415698 master-0 kubenswrapper[7440]: I0312 14:13:29.412752 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:13:29.784868 master-0 kubenswrapper[7440]: I0312 14:13:29.784561 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"]
Mar 12 14:13:29.786830 master-0 kubenswrapper[7440]: I0312 14:13:29.786150 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:29.790356 master-0 kubenswrapper[7440]: I0312 14:13:29.790318 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 12 14:13:29.790591 master-0 kubenswrapper[7440]: I0312 14:13:29.790558 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 12 14:13:29.790591 master-0 kubenswrapper[7440]: I0312 14:13:29.790578 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 12 14:13:29.791763 master-0 kubenswrapper[7440]: I0312 14:13:29.790738 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 12 14:13:29.798198 master-0 kubenswrapper[7440]: I0312 14:13:29.798166 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"]
Mar 12 14:13:29.930945 master-0 kubenswrapper[7440]: I0312 14:13:29.930889 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:29.931537 master-0 kubenswrapper[7440]: I0312 14:13:29.931502 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:29.931595 master-0 kubenswrapper[7440]: I0312 14:13:29.931575 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:29.931627 master-0 kubenswrapper[7440]: I0312 14:13:29.931614 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:29.931662 master-0 kubenswrapper[7440]: I0312 14:13:29.931644 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwtd\" (UniqueName: \"kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.033875 master-0 kubenswrapper[7440]: I0312 14:13:30.033798 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.033875 master-0 kubenswrapper[7440]: I0312 14:13:30.033881 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.034157 master-0 kubenswrapper[7440]: I0312 14:13:30.034008 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.034289 master-0 kubenswrapper[7440]: I0312 14:13:30.034213 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.034449 master-0 kubenswrapper[7440]: I0312 14:13:30.034415 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwtd\" (UniqueName: \"kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.034633 master-0 kubenswrapper[7440]: I0312 14:13:30.034600 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.034906 master-0 kubenswrapper[7440]: I0312 14:13:30.034849 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.037237 master-0 kubenswrapper[7440]: I0312 14:13:30.037173 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.038453 master-0 kubenswrapper[7440]: I0312 14:13:30.038385 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.072044 master-0 kubenswrapper[7440]: I0312 14:13:30.071998 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwtd\" (UniqueName: \"kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.115521 master-0 kubenswrapper[7440]: I0312 14:13:30.115468 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"
Mar 12 14:13:30.202764 master-0 kubenswrapper[7440]: I0312 14:13:30.202719 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 12 14:13:31.007412 master-0 kubenswrapper[7440]: I0312 14:13:31.007343 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"]
Mar 12 14:13:31.008337 master-0 kubenswrapper[7440]: I0312 14:13:31.008303 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.016467 master-0 kubenswrapper[7440]: I0312 14:13:31.011176 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 12 14:13:31.016467 master-0 kubenswrapper[7440]: I0312 14:13:31.013750 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 12 14:13:31.018454 master-0 kubenswrapper[7440]: I0312 14:13:31.018021 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"]
Mar 12 14:13:31.146834 master-0 kubenswrapper[7440]: I0312 14:13:31.146710 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.146834 master-0 kubenswrapper[7440]: I0312 14:13:31.146766 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cq8\" (UniqueName: \"kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.147258 master-0 kubenswrapper[7440]: I0312 14:13:31.147209 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.249580 master-0 kubenswrapper[7440]: I0312 14:13:31.248378 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.249580 master-0 kubenswrapper[7440]: I0312 14:13:31.248428 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2cq8\" (UniqueName: \"kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.249580 master-0 kubenswrapper[7440]: I0312 14:13:31.249195 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.251955 master-0 kubenswrapper[7440]: I0312 14:13:31.250029 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.257040 master-0 kubenswrapper[7440]: I0312 14:13:31.252423 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.270066 master-0 kubenswrapper[7440]: I0312 14:13:31.268670 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2cq8\" (UniqueName: \"kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.312019 master-0 kubenswrapper[7440]: I0312 14:13:31.311927 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-gltz7"]
Mar 12 14:13:31.313464 master-0 kubenswrapper[7440]: I0312 14:13:31.312792 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:13:31.316361 master-0 kubenswrapper[7440]: I0312 14:13:31.316169 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 12 14:13:31.316361 master-0 kubenswrapper[7440]: I0312 14:13:31.316244 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 12 14:13:31.316361 master-0 kubenswrapper[7440]: I0312 14:13:31.316175 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 12 14:13:31.316765 master-0 kubenswrapper[7440]: I0312 14:13:31.316659 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 12 14:13:31.320993 master-0 kubenswrapper[7440]: I0312 14:13:31.320567 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 12 14:13:31.329521 master-0 kubenswrapper[7440]: I0312 14:13:31.327352 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-gltz7"]
Mar 12 14:13:31.329521 master-0 kubenswrapper[7440]: I0312 14:13:31.328219 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:13:31.351633 master-0 kubenswrapper[7440]: I0312 14:13:31.351417 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:13:31.351633 master-0 kubenswrapper[7440]: I0312 14:13:31.351468 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcq7v\" (UniqueName: \"kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:13:31.351633 master-0 kubenswrapper[7440]: I0312 14:13:31.351486 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:13:31.351633 master-0 kubenswrapper[7440]: I0312 14:13:31.351503 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:13:31.351633 master-0 kubenswrapper[7440]: I0312 14:13:31.351536 7440 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.452554 master-0 kubenswrapper[7440]: I0312 14:13:31.452488 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.452827 master-0 kubenswrapper[7440]: I0312 14:13:31.452812 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcq7v\" (UniqueName: \"kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.452938 master-0 kubenswrapper[7440]: I0312 14:13:31.452921 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.453061 master-0 kubenswrapper[7440]: I0312 14:13:31.453040 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " 
pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.453165 master-0 kubenswrapper[7440]: I0312 14:13:31.453153 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.454598 master-0 kubenswrapper[7440]: I0312 14:13:31.453811 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.454598 master-0 kubenswrapper[7440]: I0312 14:13:31.453982 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.455137 master-0 kubenswrapper[7440]: I0312 14:13:31.454994 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.469960 master-0 kubenswrapper[7440]: I0312 14:13:31.456970 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod 
\"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.469960 master-0 kubenswrapper[7440]: I0312 14:13:31.469540 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcq7v\" (UniqueName: \"kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:31.636531 master-0 kubenswrapper[7440]: I0312 14:13:31.636410 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:13:34.757927 master-0 kubenswrapper[7440]: I0312 14:13:34.756289 7440 generic.go:334] "Generic (PLEG): container finished" podID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" containerID="9ba513db643889b41a810dd1c7684949b6c126d71f8ce738dd6a0c0db835816a" exitCode=0 Mar 12 14:13:34.757927 master-0 kubenswrapper[7440]: I0312 14:13:34.756369 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerDied","Data":"9ba513db643889b41a810dd1c7684949b6c126d71f8ce738dd6a0c0db835816a"} Mar 12 14:13:34.757927 master-0 kubenswrapper[7440]: I0312 14:13:34.757192 7440 scope.go:117] "RemoveContainer" containerID="9ba513db643889b41a810dd1c7684949b6c126d71f8ce738dd6a0c0db835816a" Mar 12 14:13:34.758620 master-0 kubenswrapper[7440]: I0312 14:13:34.758473 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerStarted","Data":"6f7345e68d0284239c7ffeb41360ca60627706e3ed5e6f0ee04f56580c16d2e9"} Mar 12 14:13:34.760171 master-0 kubenswrapper[7440]: 
I0312 14:13:34.759739 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_d3364860-0708-4eef-ac94-94992bf2d631/installer/0.log" Mar 12 14:13:34.760171 master-0 kubenswrapper[7440]: I0312 14:13:34.759769 7440 generic.go:334] "Generic (PLEG): container finished" podID="d3364860-0708-4eef-ac94-94992bf2d631" containerID="f82502f50ac79890c44461c13992c782465cf9d5879da841305e795b8aa38182" exitCode=1 Mar 12 14:13:34.760171 master-0 kubenswrapper[7440]: I0312 14:13:34.759823 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"d3364860-0708-4eef-ac94-94992bf2d631","Type":"ContainerDied","Data":"f82502f50ac79890c44461c13992c782465cf9d5879da841305e795b8aa38182"} Mar 12 14:13:34.760673 master-0 kubenswrapper[7440]: I0312 14:13:34.760646 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerStarted","Data":"ad667b1962e9be89dad22c04e8baae0b8b39d88482f4ed8d30c8828a965ec326"} Mar 12 14:13:34.886392 master-0 kubenswrapper[7440]: I0312 14:13:34.886347 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_d3364860-0708-4eef-ac94-94992bf2d631/installer/0.log" Mar 12 14:13:34.886569 master-0 kubenswrapper[7440]: I0312 14:13:34.886409 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 14:13:34.907009 master-0 kubenswrapper[7440]: I0312 14:13:34.906005 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock\") pod \"d3364860-0708-4eef-ac94-94992bf2d631\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " Mar 12 14:13:34.907009 master-0 kubenswrapper[7440]: I0312 14:13:34.906061 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access\") pod \"d3364860-0708-4eef-ac94-94992bf2d631\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " Mar 12 14:13:34.907009 master-0 kubenswrapper[7440]: I0312 14:13:34.906120 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir\") pod \"d3364860-0708-4eef-ac94-94992bf2d631\" (UID: \"d3364860-0708-4eef-ac94-94992bf2d631\") " Mar 12 14:13:34.907009 master-0 kubenswrapper[7440]: I0312 14:13:34.906978 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock" (OuterVolumeSpecName: "var-lock") pod "d3364860-0708-4eef-ac94-94992bf2d631" (UID: "d3364860-0708-4eef-ac94-94992bf2d631"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:13:34.910922 master-0 kubenswrapper[7440]: I0312 14:13:34.908220 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d3364860-0708-4eef-ac94-94992bf2d631" (UID: "d3364860-0708-4eef-ac94-94992bf2d631"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:13:34.912539 master-0 kubenswrapper[7440]: I0312 14:13:34.911620 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d3364860-0708-4eef-ac94-94992bf2d631" (UID: "d3364860-0708-4eef-ac94-94992bf2d631"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:13:35.007872 master-0 kubenswrapper[7440]: I0312 14:13:35.007813 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:35.007872 master-0 kubenswrapper[7440]: I0312 14:13:35.007857 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3364860-0708-4eef-ac94-94992bf2d631-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:35.007872 master-0 kubenswrapper[7440]: I0312 14:13:35.007870 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3364860-0708-4eef-ac94-94992bf2d631-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:13:35.766292 master-0 kubenswrapper[7440]: I0312 14:13:35.766248 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerStarted","Data":"763faa898e18449dd9a50b708e0137c7362e38addce32c4afec9964d733e4f39"} Mar 12 14:13:35.767810 master-0 kubenswrapper[7440]: I0312 14:13:35.767789 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerStarted","Data":"44912c45860c53bd920d6344d008ca95bda45324f0583a0a019e5ef0a05b1d24"} Mar 12 14:13:35.769584 master-0 kubenswrapper[7440]: I0312 14:13:35.769557 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerStarted","Data":"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"} Mar 12 14:13:35.771000 master-0 kubenswrapper[7440]: I0312 14:13:35.770979 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" event={"ID":"2cb9e9e2-2673-4673-8999-622c97440572","Type":"ContainerStarted","Data":"d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07"} Mar 12 14:13:35.771093 master-0 kubenswrapper[7440]: I0312 14:13:35.771072 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" podUID="2cb9e9e2-2673-4673-8999-622c97440572" containerName="route-controller-manager" containerID="cri-o://d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07" gracePeriod=30 Mar 12 14:13:35.772053 master-0 kubenswrapper[7440]: I0312 14:13:35.772021 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" Mar 12 14:13:35.773587 master-0 kubenswrapper[7440]: I0312 14:13:35.773564 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_d3364860-0708-4eef-ac94-94992bf2d631/installer/0.log" Mar 12 14:13:35.773644 master-0 kubenswrapper[7440]: I0312 14:13:35.773620 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" 
event={"ID":"d3364860-0708-4eef-ac94-94992bf2d631","Type":"ContainerDied","Data":"8da3b2fa0fcd528d3c970486ebbff3077b0323e9beb917763b1f850b9e4f435f"} Mar 12 14:13:35.773673 master-0 kubenswrapper[7440]: I0312 14:13:35.773649 7440 scope.go:117] "RemoveContainer" containerID="f82502f50ac79890c44461c13992c782465cf9d5879da841305e795b8aa38182" Mar 12 14:13:35.773764 master-0 kubenswrapper[7440]: I0312 14:13:35.773746 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 14:13:35.779537 master-0 kubenswrapper[7440]: I0312 14:13:35.779492 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" event={"ID":"4b264724-e891-4923-9304-cfdcb0c97f3d","Type":"ContainerStarted","Data":"840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008"} Mar 12 14:13:35.779685 master-0 kubenswrapper[7440]: I0312 14:13:35.779648 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerName="controller-manager" containerID="cri-o://840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008" gracePeriod=30 Mar 12 14:13:35.780060 master-0 kubenswrapper[7440]: I0312 14:13:35.780019 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" Mar 12 14:13:35.782449 master-0 kubenswrapper[7440]: I0312 14:13:35.782314 7440 patch_prober.go:28] interesting pod/route-controller-manager-84bf88fbd-c4hcn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": read tcp 10.128.0.2:51236->10.128.0.50:8443: read: connection reset by peer" start-of-body= Mar 12 14:13:35.782449 master-0 kubenswrapper[7440]: I0312 
14:13:35.782352 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" podUID="2cb9e9e2-2673-4673-8999-622c97440572" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": read tcp 10.128.0.2:51236->10.128.0.50:8443: read: connection reset by peer" Mar 12 14:13:35.783528 master-0 kubenswrapper[7440]: I0312 14:13:35.783389 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261"} Mar 12 14:13:35.785667 master-0 kubenswrapper[7440]: I0312 14:13:35.785257 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerStarted","Data":"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da"} Mar 12 14:13:35.786451 master-0 kubenswrapper[7440]: I0312 14:13:35.786397 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerStarted","Data":"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"} Mar 12 14:13:35.788645 master-0 kubenswrapper[7440]: I0312 14:13:35.788620 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerStarted","Data":"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba"} Mar 12 14:13:35.789866 master-0 kubenswrapper[7440]: I0312 14:13:35.789682 7440 patch_prober.go:28] interesting pod/controller-manager-865cf8f5f4-frvwv container/controller-manager namespace/openshift-controller-manager: Readiness 
probe status=failure output="Get \"https://10.128.0.46:8443/healthz\": read tcp 10.128.0.2:47240->10.128.0.46:8443: read: connection reset by peer" start-of-body= Mar 12 14:13:35.789866 master-0 kubenswrapper[7440]: I0312 14:13:35.789724 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.46:8443/healthz\": read tcp 10.128.0.2:47240->10.128.0.46:8443: read: connection reset by peer" Mar 12 14:13:36.177965 master-0 kubenswrapper[7440]: I0312 14:13:36.174608 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj"] Mar 12 14:13:36.196263 master-0 kubenswrapper[7440]: I0312 14:13:36.193361 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc"] Mar 12 14:13:36.196348 master-0 kubenswrapper[7440]: I0312 14:13:36.196233 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"] Mar 12 14:13:36.205625 master-0 kubenswrapper[7440]: I0312 14:13:36.203598 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-gltz7"] Mar 12 14:13:36.210451 master-0 kubenswrapper[7440]: I0312 14:13:36.210383 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"] Mar 12 14:13:36.232328 master-0 kubenswrapper[7440]: I0312 14:13:36.227644 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"] Mar 12 14:13:36.233587 master-0 kubenswrapper[7440]: I0312 14:13:36.233536 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 14:13:36.245915 master-0 kubenswrapper[7440]: I0312 14:13:36.245868 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.250712 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v"] Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: E0312 14:13:36.250927 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3364860-0708-4eef-ac94-94992bf2d631" containerName="installer" Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.250939 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3364860-0708-4eef-ac94-94992bf2d631" containerName="installer" Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.251030 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3364860-0708-4eef-ac94-94992bf2d631" containerName="installer" Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.251529 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw"] Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.252143 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.255415 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 14:13:36.258764 master-0 kubenswrapper[7440]: I0312 14:13:36.258552 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v"] Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.270604 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.271207 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw"] Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.273055 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.273246 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.273443 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.275986 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.276677 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" 
Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.279767 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 14:13:36.302283 master-0 kubenswrapper[7440]: I0312 14:13:36.290846 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 14:13:36.311924 master-0 kubenswrapper[7440]: I0312 14:13:36.311395 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" podStartSLOduration=9.148738399 podStartE2EDuration="31.31137739s" podCreationTimestamp="2026-03-12 14:13:05 +0000 UTC" firstStartedPulling="2026-03-12 14:13:07.563093696 +0000 UTC m=+47.898472255" lastFinishedPulling="2026-03-12 14:13:29.725732687 +0000 UTC m=+70.061111246" observedRunningTime="2026-03-12 14:13:36.310399565 +0000 UTC m=+76.645778134" watchObservedRunningTime="2026-03-12 14:13:36.31137739 +0000 UTC m=+76.646755949" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: I0312 14:13:36.324818 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: I0312 14:13:36.324877 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: 
I0312 14:13:36.324941 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: I0312 14:13:36.324963 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: I0312 14:13:36.325250 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276qm\" (UniqueName: \"kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.326060 master-0 kubenswrapper[7440]: I0312 14:13:36.325366 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.381751 master-0 kubenswrapper[7440]: I0312 14:13:36.381667 7440 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" podStartSLOduration=10.776107126 podStartE2EDuration="31.381645711s" podCreationTimestamp="2026-03-12 14:13:05 +0000 UTC" firstStartedPulling="2026-03-12 14:13:13.575991116 +0000 UTC m=+53.911369675" lastFinishedPulling="2026-03-12 14:13:34.181529701 +0000 UTC m=+74.516908260" observedRunningTime="2026-03-12 14:13:36.380966294 +0000 UTC m=+76.716344853" watchObservedRunningTime="2026-03-12 14:13:36.381645711 +0000 UTC m=+76.717024270" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 14:13:36.426503 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276qm\" (UniqueName: \"kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 14:13:36.426564 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 14:13:36.426600 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 
14:13:36.426626 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 14:13:36.426647 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.426827 master-0 kubenswrapper[7440]: I0312 14:13:36.426667 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.428386 master-0 kubenswrapper[7440]: I0312 14:13:36.427981 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.428690 master-0 kubenswrapper[7440]: I0312 14:13:36.428656 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod 
\"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.438317 master-0 kubenswrapper[7440]: I0312 14:13:36.438163 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.439054 master-0 kubenswrapper[7440]: I0312 14:13:36.438953 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.474940 master-0 kubenswrapper[7440]: I0312 14:13:36.467573 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.474940 master-0 kubenswrapper[7440]: I0312 14:13:36.472801 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276qm\" (UniqueName: \"kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" 
Mar 12 14:13:36.509277 master-0 kubenswrapper[7440]: I0312 14:13:36.502053 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=26.502040524 podStartE2EDuration="26.502040524s" podCreationTimestamp="2026-03-12 14:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:36.500977898 +0000 UTC m=+76.836356467" watchObservedRunningTime="2026-03-12 14:13:36.502040524 +0000 UTC m=+76.837419083" Mar 12 14:13:36.536228 master-0 kubenswrapper[7440]: I0312 14:13:36.536176 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"] Mar 12 14:13:36.537199 master-0 kubenswrapper[7440]: I0312 14:13:36.537178 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.551724 master-0 kubenswrapper[7440]: I0312 14:13:36.549659 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 14:13:36.551724 master-0 kubenswrapper[7440]: I0312 14:13:36.549858 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:13:36.551724 master-0 kubenswrapper[7440]: I0312 14:13:36.551082 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:13:36.551724 master-0 kubenswrapper[7440]: I0312 14:13:36.551424 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 14:13:36.551724 master-0 kubenswrapper[7440]: I0312 
14:13:36.551428 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 14:13:36.590552 master-0 kubenswrapper[7440]: I0312 14:13:36.588459 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=16.588439425 podStartE2EDuration="16.588439425s" podCreationTimestamp="2026-03-12 14:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:36.58620819 +0000 UTC m=+76.921586749" watchObservedRunningTime="2026-03-12 14:13:36.588439425 +0000 UTC m=+76.923817984" Mar 12 14:13:36.599397 master-0 kubenswrapper[7440]: W0312 14:13:36.597051 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b264724_e891_4923_9304_cfdcb0c97f3d.slice/crio-conmon-840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b264724_e891_4923_9304_cfdcb0c97f3d.slice/crio-conmon-840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008.scope: no such file or directory Mar 12 14:13:36.599397 master-0 kubenswrapper[7440]: W0312 14:13:36.597098 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda932351b_831e_4930_85a2_f2faf1e6b262.slice/crio-conmon-301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda932351b_831e_4930_85a2_f2faf1e6b262.slice/crio-conmon-301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da.scope: no such file or directory Mar 12 
14:13:36.599397 master-0 kubenswrapper[7440]: W0312 14:13:36.597119 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9e9e2_2673_4673_8999_622c97440572.slice/crio-conmon-d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9e9e2_2673_4673_8999_622c97440572.slice/crio-conmon-d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07.scope: no such file or directory Mar 12 14:13:36.599397 master-0 kubenswrapper[7440]: W0312 14:13:36.597134 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd181b683_a575_45a3_b736_ad4e07486545.slice/crio-conmon-d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd181b683_a575_45a3_b736_ad4e07486545.slice/crio-conmon-d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba.scope: no such file or directory Mar 12 14:13:36.599397 master-0 kubenswrapper[7440]: W0312 14:13:36.597151 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-conmon-0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-conmon-0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a.scope: no such file or directory Mar 12 14:13:36.602942 master-0 kubenswrapper[7440]: W0312 14:13:36.599998 7440 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b264724_e891_4923_9304_cfdcb0c97f3d.slice/crio-840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b264724_e891_4923_9304_cfdcb0c97f3d.slice/crio-840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008.scope: no such file or directory Mar 12 14:13:36.602942 master-0 kubenswrapper[7440]: W0312 14:13:36.600082 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9e9e2_2673_4673_8999_622c97440572.slice/crio-d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9e9e2_2673_4673_8999_622c97440572.slice/crio-d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07.scope: no such file or directory Mar 12 14:13:36.602942 master-0 kubenswrapper[7440]: W0312 14:13:36.600110 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a.scope: no such file or directory Mar 12 14:13:36.602942 master-0 kubenswrapper[7440]: W0312 14:13:36.600134 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda932351b_831e_4930_85a2_f2faf1e6b262.slice/crio-301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da.scope": 
0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda932351b_831e_4930_85a2_f2faf1e6b262.slice/crio-301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da.scope: no such file or directory Mar 12 14:13:36.605224 master-0 kubenswrapper[7440]: W0312 14:13:36.605142 7440 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd181b683_a575_45a3_b736_ad4e07486545.slice/crio-d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd181b683_a575_45a3_b736_ad4e07486545.slice/crio-d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba.scope: no such file or directory Mar 12 14:13:36.611067 master-0 kubenswrapper[7440]: I0312 14:13:36.611023 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:13:36.691352 master-0 kubenswrapper[7440]: I0312 14:13:36.685796 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:13:36.787125 master-0 kubenswrapper[7440]: I0312 14:13:36.787088 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.787495 master-0 kubenswrapper[7440]: I0312 14:13:36.787135 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.787495 master-0 kubenswrapper[7440]: I0312 14:13:36.787157 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.787495 master-0 kubenswrapper[7440]: I0312 14:13:36.787193 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95r5\" (UniqueName: \"kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.787495 master-0 kubenswrapper[7440]: I0312 14:13:36.787224 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.802064 master-0 kubenswrapper[7440]: I0312 14:13:36.801976 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84bf88fbd-c4hcn_2cb9e9e2-2673-4673-8999-622c97440572/route-controller-manager/0.log" Mar 12 14:13:36.802064 master-0 kubenswrapper[7440]: I0312 14:13:36.802031 7440 generic.go:334] "Generic (PLEG): container finished" podID="2cb9e9e2-2673-4673-8999-622c97440572" containerID="d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07" exitCode=255 Mar 12 14:13:36.802510 master-0 kubenswrapper[7440]: I0312 14:13:36.802443 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" event={"ID":"2cb9e9e2-2673-4673-8999-622c97440572","Type":"ContainerDied","Data":"d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07"} Mar 12 14:13:36.803918 master-0 kubenswrapper[7440]: I0312 14:13:36.803850 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" 
event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"3dcc154c9494e2fe36c0a3115ac75b0708464d60dbbe1a7436789b256f05252a"} Mar 12 14:13:36.804005 master-0 kubenswrapper[7440]: I0312 14:13:36.803937 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"16c9911f528d88ff6368917af5d3a0bfb97b0cd22d43dad86b75920f982a3c90"} Mar 12 14:13:36.817282 master-0 kubenswrapper[7440]: I0312 14:13:36.817150 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"1086c8d5071e504e73694312636385db33200a4d801de67bcefe278f7df988d9"} Mar 12 14:13:36.873015 master-0 kubenswrapper[7440]: I0312 14:13:36.872837 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"] Mar 12 14:13:36.873596 master-0 kubenswrapper[7440]: I0312 14:13:36.873474 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:36.877940 master-0 kubenswrapper[7440]: I0312 14:13:36.877500 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 14:13:36.886302 master-0 kubenswrapper[7440]: I0312 14:13:36.885540 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1ea22ec3-d02f-4f30-accf-eba03f4d4214/installer/0.log" Mar 12 14:13:36.886302 master-0 kubenswrapper[7440]: I0312 14:13:36.885589 7440 generic.go:334] "Generic (PLEG): container finished" podID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" containerID="0549077fdcaf4a2aa2d8ef81531f23141be4182774336ab1344ca8cff8e70c94" exitCode=1 Mar 12 14:13:36.886392 master-0 kubenswrapper[7440]: I0312 14:13:36.886303 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1ea22ec3-d02f-4f30-accf-eba03f4d4214","Type":"ContainerDied","Data":"0549077fdcaf4a2aa2d8ef81531f23141be4182774336ab1344ca8cff8e70c94"} Mar 12 14:13:36.892652 master-0 kubenswrapper[7440]: I0312 14:13:36.892517 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.892652 master-0 kubenswrapper[7440]: I0312 14:13:36.892590 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" 
(UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.892652 master-0 kubenswrapper[7440]: I0312 14:13:36.892621 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.892984 master-0 kubenswrapper[7440]: I0312 14:13:36.892668 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95r5\" (UniqueName: \"kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.892984 master-0 kubenswrapper[7440]: I0312 14:13:36.892704 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.900433 master-0 kubenswrapper[7440]: I0312 14:13:36.898584 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.900433 master-0 kubenswrapper[7440]: I0312 14:13:36.899990 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.901708 master-0 kubenswrapper[7440]: I0312 14:13:36.901495 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.909013 master-0 kubenswrapper[7440]: I0312 14:13:36.907491 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.911149 master-0 kubenswrapper[7440]: I0312 14:13:36.909584 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"] Mar 12 14:13:36.919861 master-0 kubenswrapper[7440]: I0312 14:13:36.914725 
7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"] Mar 12 14:13:36.920116 master-0 kubenswrapper[7440]: I0312 14:13:36.919875 7440 generic.go:334] "Generic (PLEG): container finished" podID="a932351b-831e-4930-85a2-f2faf1e6b262" containerID="301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da" exitCode=0 Mar 12 14:13:36.928205 master-0 kubenswrapper[7440]: I0312 14:13:36.920743 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"] Mar 12 14:13:36.928205 master-0 kubenswrapper[7440]: I0312 14:13:36.920775 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerDied","Data":"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da"} Mar 12 14:13:36.928205 master-0 kubenswrapper[7440]: I0312 14:13:36.920862 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:13:36.936228 master-0 kubenswrapper[7440]: I0312 14:13:36.936171 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 14:13:36.940387 master-0 kubenswrapper[7440]: I0312 14:13:36.940164 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"e7a243dee19ff7c60c3cfd7b46d1da9cee4b1db91f6862f6afe950a9febf71ef"} Mar 12 14:13:36.940387 master-0 kubenswrapper[7440]: I0312 14:13:36.940213 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"2376cfb1ee60c237c8964f78aeee837ea12e09f11b9b3dfc1320568c3b4a4743"} Mar 12 14:13:36.945356 master-0 kubenswrapper[7440]: I0312 14:13:36.943782 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 14:13:36.945356 master-0 kubenswrapper[7440]: I0312 14:13:36.943996 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 14:13:36.948723 master-0 kubenswrapper[7440]: I0312 14:13:36.948671 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"8bae2bf48688fed38a08346cb01a13f07f5d6ebf571f08738d916c6d12d3bb19"} Mar 12 14:13:36.961825 master-0 kubenswrapper[7440]: I0312 14:13:36.961743 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95r5\" (UniqueName: 
\"kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5\") pod \"cluster-cloud-controller-manager-operator-559568b945-9d9f2\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:13:36.978121 master-0 kubenswrapper[7440]: I0312 14:13:36.970923 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"56fb91c7-1b94-4f59-82f2-3025f0b02e43","Type":"ContainerStarted","Data":"df05180a9aba2b044bc8cc4f8bc493121c5ae7f993a124591b67cf0a86c60578"} Mar 12 14:13:37.004497 master-0 kubenswrapper[7440]: I0312 14:13:36.996440 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:37.004497 master-0 kubenswrapper[7440]: I0312 14:13:36.996489 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvhc\" (UniqueName: \"kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:37.004497 master-0 kubenswrapper[7440]: I0312 14:13:36.996515 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " 
pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:37.004497 master-0 kubenswrapper[7440]: I0312 14:13:36.996544 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:37.019436 master-0 kubenswrapper[7440]: I0312 14:13:37.014463 7440 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435" exitCode=0 Mar 12 14:13:37.019436 master-0 kubenswrapper[7440]: I0312 14:13:37.014552 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435"} Mar 12 14:13:37.019436 master-0 kubenswrapper[7440]: I0312 14:13:37.015039 7440 scope.go:117] "RemoveContainer" containerID="7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435" Mar 12 14:13:37.024455 master-0 kubenswrapper[7440]: I0312 14:13:37.024431 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84bf88fbd-c4hcn_2cb9e9e2-2673-4673-8999-622c97440572/route-controller-manager/0.log" Mar 12 14:13:37.024588 master-0 kubenswrapper[7440]: I0312 14:13:37.024498 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"
Mar 12 14:13:37.060201 master-0 kubenswrapper[7440]: I0312 14:13:37.059364 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerStarted","Data":"7f4e5afa4afe018a7c389e007a13d614d179ad2102c4e104bffdef509a1d7c7b"}
Mar 12 14:13:37.061634 master-0 kubenswrapper[7440]: I0312 14:13:37.061611 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"
Mar 12 14:13:37.072120 master-0 kubenswrapper[7440]: I0312 14:13:37.072028 7440 generic.go:334] "Generic (PLEG): container finished" podID="76d596c0-6a41-43e1-9516-aee9ad834ec2" containerID="1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3" exitCode=0
Mar 12 14:13:37.072120 master-0 kubenswrapper[7440]: I0312 14:13:37.072097 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerDied","Data":"1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3"}
Mar 12 14:13:37.072508 master-0 kubenswrapper[7440]: I0312 14:13:37.072487 7440 scope.go:117] "RemoveContainer" containerID="1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.110356 7440 generic.go:334] "Generic (PLEG): container finished" podID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerID="0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a" exitCode=0
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.110445 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerDied","Data":"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"}
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: E0312 14:13:37.111108 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d596c0_6a41_43e1_9516_aee9ad834ec2.slice/crio-conmon-1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57930a54_89ab_4ec8_a504_74035bb74d63.slice/crio-conmon-7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d596c0_6a41_43e1_9516_aee9ad834ec2.slice/crio-1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd3364860_0708_4eef_ac94_94992bf2d631.slice/crio-8da3b2fa0fcd528d3c970486ebbff3077b0323e9beb917763b1f850b9e4f435f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd3364860_0708_4eef_ac94_94992bf2d631.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57930a54_89ab_4ec8_a504_74035bb74d63.slice/crio-7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111296 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config\") pod \"4b264724-e891-4923-9304-cfdcb0c97f3d\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111366 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z778j\" (UniqueName: \"kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j\") pod \"4b264724-e891-4923-9304-cfdcb0c97f3d\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111391 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles\") pod \"4b264724-e891-4923-9304-cfdcb0c97f3d\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111409 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca\") pod \"4b264724-e891-4923-9304-cfdcb0c97f3d\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111430 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dtbf\" (UniqueName: \"kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf\") pod \"2cb9e9e2-2673-4673-8999-622c97440572\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111474 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca\") pod \"2cb9e9e2-2673-4673-8999-622c97440572\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111511 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config\") pod \"2cb9e9e2-2673-4673-8999-622c97440572\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111544 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert\") pod \"2cb9e9e2-2673-4673-8999-622c97440572\" (UID: \"2cb9e9e2-2673-4673-8999-622c97440572\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111565 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert\") pod \"4b264724-e891-4923-9304-cfdcb0c97f3d\" (UID: \"4b264724-e891-4923-9304-cfdcb0c97f3d\") "
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111694 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkwb\" (UniqueName: \"kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111721 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111810 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111848 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111867 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111919 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvhc\" (UniqueName: \"kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.111970 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.112007 master-0 kubenswrapper[7440]: I0312 14:13:37.112004 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.112669 master-0 kubenswrapper[7440]: I0312 14:13:37.112358 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.124983 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cb9e9e2-2673-4673-8999-622c97440572" (UID: "2cb9e9e2-2673-4673-8999-622c97440572"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.126315 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cb9e9e2-2673-4673-8999-622c97440572" (UID: "2cb9e9e2-2673-4673-8999-622c97440572"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.126684 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config" (OuterVolumeSpecName: "config") pod "2cb9e9e2-2673-4673-8999-622c97440572" (UID: "2cb9e9e2-2673-4673-8999-622c97440572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.127880 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j" (OuterVolumeSpecName: "kube-api-access-z778j") pod "4b264724-e891-4923-9304-cfdcb0c97f3d" (UID: "4b264724-e891-4923-9304-cfdcb0c97f3d"). InnerVolumeSpecName "kube-api-access-z778j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.128190 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.128507 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4b264724-e891-4923-9304-cfdcb0c97f3d" (UID: "4b264724-e891-4923-9304-cfdcb0c97f3d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:13:37.129222 master-0 kubenswrapper[7440]: I0312 14:13:37.129144 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b264724-e891-4923-9304-cfdcb0c97f3d" (UID: "4b264724-e891-4923-9304-cfdcb0c97f3d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:13:37.134242 master-0 kubenswrapper[7440]: I0312 14:13:37.131515 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.134242 master-0 kubenswrapper[7440]: I0312 14:13:37.132919 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config" (OuterVolumeSpecName: "config") pod "4b264724-e891-4923-9304-cfdcb0c97f3d" (UID: "4b264724-e891-4923-9304-cfdcb0c97f3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:13:37.134242 master-0 kubenswrapper[7440]: I0312 14:13:37.134130 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b264724-e891-4923-9304-cfdcb0c97f3d" (UID: "4b264724-e891-4923-9304-cfdcb0c97f3d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:13:37.145668 master-0 kubenswrapper[7440]: I0312 14:13:37.144938 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf" (OuterVolumeSpecName: "kube-api-access-4dtbf") pod "2cb9e9e2-2673-4673-8999-622c97440572" (UID: "2cb9e9e2-2673-4673-8999-622c97440572"). InnerVolumeSpecName "kube-api-access-4dtbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:13:37.148706 master-0 kubenswrapper[7440]: I0312 14:13:37.147958 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerStarted","Data":"f241461857ce59ea560667138db043106ff9a507cada6dbe3fa25235fbd8ecbd"}
Mar 12 14:13:37.155237 master-0 kubenswrapper[7440]: I0312 14:13:37.151733 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvhc\" (UniqueName: \"kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.185636 master-0 kubenswrapper[7440]: I0312 14:13:37.185598 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw"]
Mar 12 14:13:37.186125 master-0 kubenswrapper[7440]: I0312 14:13:37.185976 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerStarted","Data":"83a78b6bdc6bac34701501df7342c8dd451a72192f273fdc21aa0b983df21030"}
Mar 12 14:13:37.210563 master-0 kubenswrapper[7440]: I0312 14:13:37.210523 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.212965 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213018 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213063 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkwb\" (UniqueName: \"kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213084 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213201 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9e9e2-2673-4673-8999-622c97440572-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213215 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b264724-e891-4923-9304-cfdcb0c97f3d-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213224 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213235 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z778j\" (UniqueName: \"kubernetes.io/projected/4b264724-e891-4923-9304-cfdcb0c97f3d-kube-api-access-z778j\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213244 7440 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213252 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b264724-e891-4923-9304-cfdcb0c97f3d-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213261 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dtbf\" (UniqueName: \"kubernetes.io/projected/2cb9e9e2-2673-4673-8999-622c97440572-kube-api-access-4dtbf\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213270 7440 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.213495 master-0 kubenswrapper[7440]: I0312 14:13:37.213278 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9e9e2-2673-4673-8999-622c97440572-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.214849 master-0 kubenswrapper[7440]: I0312 14:13:37.214636 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.214849 master-0 kubenswrapper[7440]: I0312 14:13:37.214810 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.219028 master-0 kubenswrapper[7440]: I0312 14:13:37.219007 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.245480 master-0 kubenswrapper[7440]: W0312 14:13:37.245397 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06eb9f4b_167e_435b_8ef6_ae44fc0b85a9.slice/crio-7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8 WatchSource:0}: Error finding container 7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8: Status 404 returned error can't find the container with id 7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8
Mar 12 14:13:37.246969 master-0 kubenswrapper[7440]: I0312 14:13:37.246622 7440 generic.go:334] "Generic (PLEG): container finished" podID="d181b683-a575-45a3-b736-ad4e07486545" containerID="d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba" exitCode=0
Mar 12 14:13:37.246969 master-0 kubenswrapper[7440]: I0312 14:13:37.246683 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerDied","Data":"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba"}
Mar 12 14:13:37.284770 master-0 kubenswrapper[7440]: I0312 14:13:37.284690 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dkwb\" (UniqueName: \"kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.287393 master-0 kubenswrapper[7440]: I0312 14:13:37.287079 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"ddff8978b61211cf6981c8dcb5ac20ebbd703343ccf0d4864c6b4d8c7b748d88"}
Mar 12 14:13:37.292380 master-0 kubenswrapper[7440]: I0312 14:13:37.292353 7440 generic.go:334] "Generic (PLEG): container finished" podID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerID="840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008" exitCode=0
Mar 12 14:13:37.293555 master-0 kubenswrapper[7440]: I0312 14:13:37.293537 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"
Mar 12 14:13:37.294800 master-0 kubenswrapper[7440]: I0312 14:13:37.294767 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-865cf8f5f4-frvwv" event={"ID":"4b264724-e891-4923-9304-cfdcb0c97f3d","Type":"ContainerDied","Data":"840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008"}
Mar 12 14:13:37.294849 master-0 kubenswrapper[7440]: I0312 14:13:37.294822 7440 scope.go:117] "RemoveContainer" containerID="840b06f58f0c684eb701fcd5c6fecbe4a37d2a069aff6750e71351c48cd50008"
Mar 12 14:13:37.384754 master-0 kubenswrapper[7440]: I0312 14:13:37.384711 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1ea22ec3-d02f-4f30-accf-eba03f4d4214/installer/0.log"
Mar 12 14:13:37.385005 master-0 kubenswrapper[7440]: I0312 14:13:37.384988 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 14:13:37.390089 master-0 kubenswrapper[7440]: I0312 14:13:37.387171 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:37.407739 master-0 kubenswrapper[7440]: I0312 14:13:37.407697 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"]
Mar 12 14:13:37.407739 master-0 kubenswrapper[7440]: I0312 14:13:37.407741 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-865cf8f5f4-frvwv"]
Mar 12 14:13:37.432182 master-0 kubenswrapper[7440]: I0312 14:13:37.432136 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:13:37.464657 master-0 kubenswrapper[7440]: I0312 14:13:37.448888 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock\") pod \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") "
Mar 12 14:13:37.464770 master-0 kubenswrapper[7440]: I0312 14:13:37.464665 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access\") pod \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") "
Mar 12 14:13:37.464826 master-0 kubenswrapper[7440]: I0312 14:13:37.464768 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir\") pod \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\" (UID: \"1ea22ec3-d02f-4f30-accf-eba03f4d4214\") "
Mar 12 14:13:37.465294 master-0 kubenswrapper[7440]: I0312 14:13:37.449107 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock" (OuterVolumeSpecName: "var-lock") pod "1ea22ec3-d02f-4f30-accf-eba03f4d4214" (UID: "1ea22ec3-d02f-4f30-accf-eba03f4d4214"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:13:37.465294 master-0 kubenswrapper[7440]: I0312 14:13:37.464991 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ea22ec3-d02f-4f30-accf-eba03f4d4214" (UID: "1ea22ec3-d02f-4f30-accf-eba03f4d4214"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:13:37.465294 master-0 kubenswrapper[7440]: I0312 14:13:37.465248 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.465294 master-0 kubenswrapper[7440]: I0312 14:13:37.465272 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.472182 master-0 kubenswrapper[7440]: I0312 14:13:37.471089 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ea22ec3-d02f-4f30-accf-eba03f4d4214" (UID: "1ea22ec3-d02f-4f30-accf-eba03f4d4214"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:13:37.492049 master-0 kubenswrapper[7440]: I0312 14:13:37.492014 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v"]
Mar 12 14:13:37.572111 master-0 kubenswrapper[7440]: I0312 14:13:37.567558 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea22ec3-d02f-4f30-accf-eba03f4d4214-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:13:37.811993 master-0 kubenswrapper[7440]: I0312 14:13:37.811936 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" path="/var/lib/kubelet/pods/4b264724-e891-4923-9304-cfdcb0c97f3d/volumes"
Mar 12 14:13:37.812524 master-0 kubenswrapper[7440]: I0312 14:13:37.812362 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3364860-0708-4eef-ac94-94992bf2d631" path="/var/lib/kubelet/pods/d3364860-0708-4eef-ac94-94992bf2d631/volumes"
Mar 12 14:13:37.812999 master-0 kubenswrapper[7440]: I0312 14:13:37.812981 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:13:38.305916 master-0 kubenswrapper[7440]: I0312 14:13:38.305856 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"3229df69e2e642a1705181c6aea965ce680072f14717e055b2a989c42f067dc0"}
Mar 12 14:13:38.310485 master-0 kubenswrapper[7440]: I0312 14:13:38.310435 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84bf88fbd-c4hcn_2cb9e9e2-2673-4673-8999-622c97440572/route-controller-manager/0.log"
Mar 12 14:13:38.310596 master-0 kubenswrapper[7440]: I0312 14:13:38.310580 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn" event={"ID":"2cb9e9e2-2673-4673-8999-622c97440572","Type":"ContainerDied","Data":"fd5a8bd9c2ad98ab24303fb76a8cf1ab93ba119f03cf0a4291f0f8936efe0f3f"}
Mar 12 14:13:38.310662 master-0 kubenswrapper[7440]: I0312 14:13:38.310647 7440 scope.go:117] "RemoveContainer" containerID="d75ad7b9042c782ebc9ea76ed0355d35975d6be9617884f809f81bec8eddcd07"
Mar 12 14:13:38.310853 master-0 kubenswrapper[7440]: I0312 14:13:38.310837 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"
Mar 12 14:13:38.314546 master-0 kubenswrapper[7440]: I0312 14:13:38.314442 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" containerID="4bcb9b48cc8fca228497ac0b2a61db8d6fd6ac7df91adf72143bbed36d3bb12a" exitCode=0
Mar 12 14:13:38.314546 master-0 kubenswrapper[7440]: I0312 14:13:38.314512 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerDied","Data":"4bcb9b48cc8fca228497ac0b2a61db8d6fd6ac7df91adf72143bbed36d3bb12a"}
Mar 12 14:13:38.315040 master-0 kubenswrapper[7440]: I0312 14:13:38.315006 7440 scope.go:117] "RemoveContainer" containerID="4bcb9b48cc8fca228497ac0b2a61db8d6fd6ac7df91adf72143bbed36d3bb12a"
Mar 12 14:13:38.320002 master-0 kubenswrapper[7440]: I0312 14:13:38.319969 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerStarted","Data":"a7a831aba8d50e763154f735949d2f89a1f0e98463882117ee4053d40ba3f7ce"}
Mar 12 14:13:38.321805 master-0 kubenswrapper[7440]: I0312 14:13:38.321758 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerStarted","Data":"2d5c60f6fb14b7b43695baab60e1577ff08272e1f7ae298ac4d7d3adc1ea87f7"}
Mar 12 14:13:38.325859 master-0 kubenswrapper[7440]: I0312 14:13:38.325818 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"56fb91c7-1b94-4f59-82f2-3025f0b02e43","Type":"ContainerStarted","Data":"03429e462f0622cfb4b81f008568fcb386a658560e44c8b3a80cc0aa9bf08473"}
Mar 12 14:13:38.333306 master-0 kubenswrapper[7440]: I0312 14:13:38.332937 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36"}
Mar 12 14:13:38.333306 master-0 kubenswrapper[7440]: I0312 14:13:38.333049 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"9ae8ffe0fbe6457550dbcfde92cc569b256c78e408c6b4f88c41a2524eefcfab"}
Mar 12 14:13:38.341934 master-0 kubenswrapper[7440]: I0312 14:13:38.341098 7440 generic.go:334] "Generic (PLEG): container finished" podID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerID="f241461857ce59ea560667138db043106ff9a507cada6dbe3fa25235fbd8ecbd" exitCode=0
Mar 12 14:13:38.341934 master-0 kubenswrapper[7440]: I0312 14:13:38.341178 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerDied","Data":"f241461857ce59ea560667138db043106ff9a507cada6dbe3fa25235fbd8ecbd"}
Mar 12 14:13:38.346016 master-0 kubenswrapper[7440]: I0312 14:13:38.345931 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1"}
Mar 12 14:13:38.348438 master-0 kubenswrapper[7440]: I0312 14:13:38.348383 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1ea22ec3-d02f-4f30-accf-eba03f4d4214/installer/0.log"
Mar 12 14:13:38.348526 master-0 kubenswrapper[7440]: I0312 14:13:38.348469 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1ea22ec3-d02f-4f30-accf-eba03f4d4214","Type":"ContainerDied","Data":"8c1d080975e9cbaa7de4b702849cfa0b771fcdb691b2e504d05cd9dac5b3bd8f"}
Mar 12 14:13:38.348526 master-0 kubenswrapper[7440]: I0312 14:13:38.348506 7440 scope.go:117] "RemoveContainer" containerID="0549077fdcaf4a2aa2d8ef81531f23141be4182774336ab1344ca8cff8e70c94"
Mar 12 14:13:38.348651 master-0 kubenswrapper[7440]: I0312 14:13:38.348578 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 14:13:38.351127 master-0 kubenswrapper[7440]: I0312 14:13:38.350320 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8"}
Mar 12 14:13:38.819725 master-0 kubenswrapper[7440]: I0312 14:13:38.812622 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"]
Mar 12 14:13:38.819725 master-0 kubenswrapper[7440]: I0312 14:13:38.816362 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"]
Mar 12 14:13:38.833940 master-0 kubenswrapper[7440]: W0312 14:13:38.832866 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef824102_83a5_4629_8057_d4f1a57a530d.slice/crio-3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257 WatchSource:0}: Error finding container 3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257: Status 404 returned error can't find the container with id 3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257
Mar 12 14:13:39.378190 master-0 kubenswrapper[7440]: I0312 14:13:39.378085 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"b5c77a8f26bcb62c099e151f8163e284029fc1893f65e49773f468da1bd7a06d"}
Mar 12 14:13:39.378418 master-0 kubenswrapper[7440]: I0312 14:13:39.378404 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"1325db6b5fc63da3d3f80a9e903b690f2007b20dd9156b1536d772080219b0fc"}
Mar 12 14:13:39.387588 master-0 kubenswrapper[7440]: I0312 14:13:39.387353 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f"}
Mar 12 14:13:39.399172 master-0 kubenswrapper[7440]: I0312 14:13:39.397455 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" event={"ID":"ef824102-83a5-4629-8057-d4f1a57a530d","Type":"ContainerStarted","Data":"c6ae01a88bdc3790dd26c96718f2304e8d180c5079a242449e4507767ce03d7c"}
Mar 12 14:13:39.399172 master-0 kubenswrapper[7440]: I0312 14:13:39.397507 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" event={"ID":"ef824102-83a5-4629-8057-d4f1a57a530d","Type":"ContainerStarted","Data":"3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257"}
Mar 12 14:13:39.400424 master-0 kubenswrapper[7440]: I0312 14:13:39.400348 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:13:39.403556 master-0 kubenswrapper[7440]: I0312 14:13:39.403081 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"a34b6a72c3251dd9a1c20dfe3ee9652cef595595471cc3d6289a1e7342e2aae3"}
Mar 12 14:13:39.407928 master-0 kubenswrapper[7440]: I0312 14:13:39.406888 7440 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"] Mar 12 14:13:39.408848 master-0 kubenswrapper[7440]: E0312 14:13:39.408825 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerName="controller-manager" Mar 12 14:13:39.408927 master-0 kubenswrapper[7440]: I0312 14:13:39.408877 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerName="controller-manager" Mar 12 14:13:39.408968 master-0 kubenswrapper[7440]: E0312 14:13:39.408932 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" containerName="installer" Mar 12 14:13:39.408968 master-0 kubenswrapper[7440]: I0312 14:13:39.408943 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" containerName="installer" Mar 12 14:13:39.408968 master-0 kubenswrapper[7440]: E0312 14:13:39.408958 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb9e9e2-2673-4673-8999-622c97440572" containerName="route-controller-manager" Mar 12 14:13:39.409052 master-0 kubenswrapper[7440]: I0312 14:13:39.408965 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb9e9e2-2673-4673-8999-622c97440572" containerName="route-controller-manager" Mar 12 14:13:39.411472 master-0 kubenswrapper[7440]: I0312 14:13:39.411440 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" containerName="installer" Mar 12 14:13:39.411546 master-0 kubenswrapper[7440]: I0312 14:13:39.411478 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb9e9e2-2673-4673-8999-622c97440572" containerName="route-controller-manager" Mar 12 14:13:39.411546 master-0 kubenswrapper[7440]: I0312 14:13:39.411497 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b264724-e891-4923-9304-cfdcb0c97f3d" containerName="controller-manager" 
Mar 12 14:13:39.414110 master-0 kubenswrapper[7440]: I0312 14:13:39.414085 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.418393 master-0 kubenswrapper[7440]: I0312 14:13:39.418360 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:13:39.418479 master-0 kubenswrapper[7440]: I0312 14:13:39.418393 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:13:39.418541 master-0 kubenswrapper[7440]: I0312 14:13:39.418512 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:13:39.418589 master-0 kubenswrapper[7440]: I0312 14:13:39.418560 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:13:39.418838 master-0 kubenswrapper[7440]: I0312 14:13:39.418820 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:13:39.432030 master-0 kubenswrapper[7440]: I0312 14:13:39.431314 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"] Mar 12 14:13:39.432030 master-0 kubenswrapper[7440]: I0312 14:13:39.431373 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:13:39.437578 master-0 kubenswrapper[7440]: I0312 14:13:39.437527 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.446874 master-0 kubenswrapper[7440]: I0312 14:13:39.446844 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:13:39.447055 master-0 kubenswrapper[7440]: I0312 14:13:39.447021 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:13:39.447143 master-0 kubenswrapper[7440]: I0312 14:13:39.447125 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:13:39.447275 master-0 kubenswrapper[7440]: I0312 14:13:39.447260 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:13:39.452622 master-0 kubenswrapper[7440]: I0312 14:13:39.452578 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:13:39.520419 master-0 kubenswrapper[7440]: I0312 14:13:39.520371 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.520735 master-0 kubenswrapper[7440]: I0312 14:13:39.520696 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.520976 master-0 kubenswrapper[7440]: I0312 14:13:39.520946 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.521026 master-0 kubenswrapper[7440]: I0312 14:13:39.520978 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.521026 master-0 kubenswrapper[7440]: I0312 14:13:39.520999 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.521087 master-0 kubenswrapper[7440]: I0312 14:13:39.521028 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.522398 master-0 kubenswrapper[7440]: I0312 14:13:39.522339 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.522465 master-0 kubenswrapper[7440]: I0312 14:13:39.522441 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.522548 master-0 kubenswrapper[7440]: I0312 14:13:39.522527 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623559 master-0 kubenswrapper[7440]: I0312 14:13:39.623491 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623559 master-0 kubenswrapper[7440]: I0312 14:13:39.623550 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod 
\"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623787 master-0 kubenswrapper[7440]: I0312 14:13:39.623692 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.623787 master-0 kubenswrapper[7440]: I0312 14:13:39.623742 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.623787 master-0 kubenswrapper[7440]: I0312 14:13:39.623764 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623787 master-0 kubenswrapper[7440]: I0312 14:13:39.623779 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623983 master-0 kubenswrapper[7440]: I0312 
14:13:39.623795 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.623983 master-0 kubenswrapper[7440]: I0312 14:13:39.623816 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.623983 master-0 kubenswrapper[7440]: I0312 14:13:39.623837 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.625058 master-0 kubenswrapper[7440]: I0312 14:13:39.624803 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.625633 master-0 kubenswrapper[7440]: I0312 14:13:39.625601 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: 
\"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.626094 master-0 kubenswrapper[7440]: I0312 14:13:39.626061 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.626647 master-0 kubenswrapper[7440]: I0312 14:13:39.626607 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.629371 master-0 kubenswrapper[7440]: I0312 14:13:39.629297 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.629463 master-0 kubenswrapper[7440]: I0312 14:13:39.629431 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.633136 master-0 kubenswrapper[7440]: I0312 14:13:39.633078 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.659699 master-0 kubenswrapper[7440]: I0312 14:13:39.656814 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=18.656796639 podStartE2EDuration="18.656796639s" podCreationTimestamp="2026-03-12 14:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:39.655356524 +0000 UTC m=+79.990735103" watchObservedRunningTime="2026-03-12 14:13:39.656796639 +0000 UTC m=+79.992175198" Mar 12 14:13:39.666073 master-0 kubenswrapper[7440]: I0312 14:13:39.665732 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.677625 master-0 kubenswrapper[7440]: I0312 14:13:39.669784 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"] Mar 12 14:13:39.677625 master-0 kubenswrapper[7440]: I0312 14:13:39.673297 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"] Mar 12 14:13:39.684996 master-0 kubenswrapper[7440]: I0312 14:13:39.684643 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod 
\"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:39.762068 master-0 kubenswrapper[7440]: I0312 14:13:39.761484 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:13:39.786836 master-0 kubenswrapper[7440]: I0312 14:13:39.786740 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:13:40.398888 master-0 kubenswrapper[7440]: I0312 14:13:40.398763 7440 patch_prober.go:28] interesting pod/packageserver-5957c5c5dc-njb8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.63:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:13:40.398888 master-0 kubenswrapper[7440]: I0312 14:13:40.398824 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" podUID="ef824102-83a5-4629-8057-d4f1a57a530d" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.63:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:13:41.268685 master-0 kubenswrapper[7440]: I0312 14:13:41.268607 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=23.268587479 podStartE2EDuration="23.268587479s" podCreationTimestamp="2026-03-12 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:40.552476124 +0000 UTC 
m=+80.887854683" watchObservedRunningTime="2026-03-12 14:13:41.268587479 +0000 UTC m=+81.603966028" Mar 12 14:13:41.419344 master-0 kubenswrapper[7440]: I0312 14:13:41.419246 7440 patch_prober.go:28] interesting pod/packageserver-5957c5c5dc-njb8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.63:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:13:41.419344 master-0 kubenswrapper[7440]: I0312 14:13:41.419320 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" podUID="ef824102-83a5-4629-8057-d4f1a57a530d" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.63:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:13:41.723989 master-0 kubenswrapper[7440]: I0312 14:13:41.723865 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:13:43.270038 master-0 kubenswrapper[7440]: I0312 14:13:43.265936 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:13:44.579590 master-0 kubenswrapper[7440]: I0312 14:13:44.579551 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:44.961700 master-0 kubenswrapper[7440]: I0312 14:13:44.960352 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 12 14:13:45.715809 master-0 kubenswrapper[7440]: I0312 14:13:45.715583 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"] Mar 12 14:13:45.811338 master-0 kubenswrapper[7440]: I0312 14:13:45.811045 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea22ec3-d02f-4f30-accf-eba03f4d4214" path="/var/lib/kubelet/pods/1ea22ec3-d02f-4f30-accf-eba03f4d4214/volumes" Mar 12 14:13:46.176281 master-0 kubenswrapper[7440]: I0312 14:13:46.176219 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bf88fbd-c4hcn"] Mar 12 14:13:46.622027 master-0 kubenswrapper[7440]: I0312 14:13:46.615743 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" podStartSLOduration=11.615700347 podStartE2EDuration="11.615700347s" podCreationTimestamp="2026-03-12 14:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:46.610815923 +0000 UTC m=+86.946194502" watchObservedRunningTime="2026-03-12 14:13:46.615700347 +0000 UTC m=+86.951078916" Mar 12 14:13:46.725983 master-0 kubenswrapper[7440]: I0312 14:13:46.723498 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" podStartSLOduration=10.723481033 podStartE2EDuration="10.723481033s" podCreationTimestamp="2026-03-12 14:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:13:46.721350879 +0000 UTC m=+87.056729438" watchObservedRunningTime="2026-03-12 14:13:46.723481033 +0000 UTC m=+87.058859592" Mar 12 14:13:47.644949 master-0 kubenswrapper[7440]: I0312 14:13:47.644867 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ngzc8"] Mar 12 
14:13:47.645812 master-0 kubenswrapper[7440]: I0312 14:13:47.645787 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.648748 master-0 kubenswrapper[7440]: I0312 14:13:47.648689 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 14:13:47.671936 master-0 kubenswrapper[7440]: I0312 14:13:47.671859 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjsjh\" (UniqueName: \"kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.672116 master-0 kubenswrapper[7440]: I0312 14:13:47.671956 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.672116 master-0 kubenswrapper[7440]: I0312 14:13:47.671984 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.672116 master-0 kubenswrapper[7440]: I0312 14:13:47.672005 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.773888 master-0 kubenswrapper[7440]: I0312 14:13:47.773825 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.774343 master-0 kubenswrapper[7440]: I0312 14:13:47.773918 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.774343 master-0 kubenswrapper[7440]: I0312 14:13:47.773959 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.774343 master-0 kubenswrapper[7440]: I0312 14:13:47.774018 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjsjh\" (UniqueName: \"kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.774343 master-0 kubenswrapper[7440]: I0312 14:13:47.774155 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.775168 master-0 kubenswrapper[7440]: I0312 14:13:47.774934 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.788040 master-0 kubenswrapper[7440]: I0312 14:13:47.788012 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.797953 master-0 kubenswrapper[7440]: I0312 14:13:47.797888 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjsjh\" (UniqueName: \"kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:47.811585 master-0 kubenswrapper[7440]: I0312 14:13:47.811532 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb9e9e2-2673-4673-8999-622c97440572" path="/var/lib/kubelet/pods/2cb9e9e2-2673-4673-8999-622c97440572/volumes" Mar 12 14:13:47.966762 master-0 kubenswrapper[7440]: I0312 14:13:47.966675 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:13:48.459165 master-0 kubenswrapper[7440]: I0312 14:13:48.459096 7440 generic.go:334] "Generic (PLEG): container finished" podID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerID="7a2823c237ff92e61d73f497473360f5c4e92a6a6cb9f9ef1530c99732f22a88" exitCode=0 Mar 12 14:13:48.459337 master-0 kubenswrapper[7440]: I0312 14:13:48.459151 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerDied","Data":"7a2823c237ff92e61d73f497473360f5c4e92a6a6cb9f9ef1530c99732f22a88"} Mar 12 14:13:48.459793 master-0 kubenswrapper[7440]: I0312 14:13:48.459762 7440 scope.go:117] "RemoveContainer" containerID="7a2823c237ff92e61d73f497473360f5c4e92a6a6cb9f9ef1530c99732f22a88" Mar 12 14:13:48.630893 master-0 kubenswrapper[7440]: I0312 14:13:48.630845 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"] Mar 12 14:13:56.239043 master-0 kubenswrapper[7440]: I0312 14:13:56.238352 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-s5wj4"] Mar 12 14:13:56.247268 master-0 kubenswrapper[7440]: I0312 14:13:56.247213 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"] Mar 12 14:13:56.247490 master-0 kubenswrapper[7440]: I0312 14:13:56.247342 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.357973 master-0 kubenswrapper[7440]: I0312 14:13:56.357936 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tm9\" (UniqueName: \"kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.358185 master-0 kubenswrapper[7440]: I0312 14:13:56.358002 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.459222 master-0 kubenswrapper[7440]: I0312 14:13:56.459143 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tm9\" (UniqueName: \"kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.459440 master-0 kubenswrapper[7440]: I0312 14:13:56.459242 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.464180 master-0 kubenswrapper[7440]: I0312 14:13:56.464119 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.513184 master-0 kubenswrapper[7440]: I0312 14:13:56.513070 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-s5wj4"] Mar 12 14:13:56.775870 master-0 kubenswrapper[7440]: I0312 14:13:56.775800 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tm9\" (UniqueName: \"kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:56.866511 master-0 kubenswrapper[7440]: I0312 14:13:56.866462 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:13:58.526389 master-0 kubenswrapper[7440]: I0312 14:13:58.526329 7440 generic.go:334] "Generic (PLEG): container finished" podID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerID="7baf84a669ae145308ec696ea2a3c0f0d8a3eaa1489aa598dc200ebb070fc533" exitCode=0 Mar 12 14:13:58.527071 master-0 kubenswrapper[7440]: I0312 14:13:58.526416 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" event={"ID":"1edf236b-654d-4568-ab33-b1f408dcbec6","Type":"ContainerDied","Data":"7baf84a669ae145308ec696ea2a3c0f0d8a3eaa1489aa598dc200ebb070fc533"} Mar 12 14:13:58.528466 master-0 kubenswrapper[7440]: I0312 14:13:58.528437 7440 generic.go:334] "Generic (PLEG): container finished" podID="3dc73c14-852d-4957-b6ac-84366ba0594f" containerID="fa8693b6924bc011b2e5ff580645ad5ee2dc963897660400a6b7a2add716cfc2" exitCode=0 Mar 12 14:13:58.528540 master-0 kubenswrapper[7440]: I0312 14:13:58.528503 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerDied","Data":"fa8693b6924bc011b2e5ff580645ad5ee2dc963897660400a6b7a2add716cfc2"} Mar 12 14:13:58.528998 master-0 kubenswrapper[7440]: I0312 14:13:58.528973 7440 scope.go:117] "RemoveContainer" containerID="fa8693b6924bc011b2e5ff580645ad5ee2dc963897660400a6b7a2add716cfc2" Mar 12 14:13:58.531565 master-0 kubenswrapper[7440]: I0312 14:13:58.531519 7440 generic.go:334] "Generic (PLEG): container finished" podID="3f72fbbe-69f0-4622-be05-b839ff9b4d45" containerID="35b73de7804cd72eded0d5a260eb4f658c50b3bf884978dd585c75921ee17b06" exitCode=0 Mar 12 14:13:58.531565 master-0 kubenswrapper[7440]: I0312 14:13:58.531560 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerDied","Data":"35b73de7804cd72eded0d5a260eb4f658c50b3bf884978dd585c75921ee17b06"} Mar 12 14:13:58.532147 master-0 kubenswrapper[7440]: I0312 14:13:58.532107 7440 scope.go:117] "RemoveContainer" containerID="35b73de7804cd72eded0d5a260eb4f658c50b3bf884978dd585c75921ee17b06" Mar 12 14:13:58.920948 master-0 kubenswrapper[7440]: I0312 14:13:58.915269 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"] Mar 12 14:13:59.953431 master-0 kubenswrapper[7440]: I0312 14:13:59.953374 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:13:59.953431 master-0 kubenswrapper[7440]: I0312 14:13:59.953429 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:14:00.454850 master-0 kubenswrapper[7440]: I0312 14:14:00.454805 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:14:00.455415 master-0 kubenswrapper[7440]: I0312 14:14:00.455384 7440 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:14:02.209793 master-0 kubenswrapper[7440]: W0312 14:14:02.209744 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99433993_93cf_46cb_bb66_485672cb2554.slice/crio-2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e WatchSource:0}: Error finding container 2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e: Status 404 returned error can't find the container with id 2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e Mar 12 14:14:02.352045 master-0 kubenswrapper[7440]: I0312 14:14:02.351133 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:14:02.407711 master-0 kubenswrapper[7440]: I0312 14:14:02.407383 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-794bf69795-vntlz"] Mar 12 14:14:02.407711 master-0 kubenswrapper[7440]: E0312 14:14:02.407635 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="oauth-apiserver" Mar 12 14:14:02.407711 master-0 kubenswrapper[7440]: I0312 14:14:02.407646 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="oauth-apiserver" Mar 12 14:14:02.407711 master-0 kubenswrapper[7440]: E0312 14:14:02.407659 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="fix-audit-permissions" Mar 12 14:14:02.407711 master-0 kubenswrapper[7440]: I0312 14:14:02.407666 7440 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="fix-audit-permissions" Mar 12 14:14:02.408115 master-0 kubenswrapper[7440]: I0312 14:14:02.407757 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" containerName="oauth-apiserver" Mar 12 14:14:02.408432 master-0 kubenswrapper[7440]: I0312 14:14:02.408409 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.410748 master-0 kubenswrapper[7440]: I0312 14:14:02.410654 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-794bf69795-vntlz"] Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448568 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448652 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448683 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448711 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448766 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448789 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5dwz\" (UniqueName: \"kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448818 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.448840 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca\") pod \"1edf236b-654d-4568-ab33-b1f408dcbec6\" (UID: \"1edf236b-654d-4568-ab33-b1f408dcbec6\") " Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.449382 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: 
"1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.449475 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.449466 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.449783 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.450230 7440 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.450246 7440 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1edf236b-654d-4568-ab33-b1f408dcbec6-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.450261 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.450270 7440 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edf236b-654d-4568-ab33-b1f408dcbec6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.453233 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.453289 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz" (OuterVolumeSpecName: "kube-api-access-t5dwz") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "kube-api-access-t5dwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.453535 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:14:02.458534 master-0 kubenswrapper[7440]: I0312 14:14:02.455026 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1edf236b-654d-4568-ab33-b1f408dcbec6" (UID: "1edf236b-654d-4568-ab33-b1f408dcbec6"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551116 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551166 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2zk\" (UniqueName: \"kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551199 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551215 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551245 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551360 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551444 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551481 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551584 7440 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551602 7440 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551614 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5dwz\" (UniqueName: \"kubernetes.io/projected/1edf236b-654d-4568-ab33-b1f408dcbec6-kube-api-access-t5dwz\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.553795 master-0 kubenswrapper[7440]: I0312 14:14:02.551627 7440 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1edf236b-654d-4568-ab33-b1f408dcbec6-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:02.571435 master-0 kubenswrapper[7440]: I0312 14:14:02.571408 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/0.log" Mar 12 14:14:02.571598 master-0 kubenswrapper[7440]: I0312 14:14:02.571449 7440 generic.go:334] "Generic (PLEG): container finished" podID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" containerID="4767c99ca8b14443f1382cd9b5a19a4aba786928a26c41b8fce765c6d6383500" exitCode=1 Mar 12 14:14:02.571598 master-0 kubenswrapper[7440]: I0312 14:14:02.571580 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerDied","Data":"4767c99ca8b14443f1382cd9b5a19a4aba786928a26c41b8fce765c6d6383500"} Mar 12 14:14:02.571977 master-0 kubenswrapper[7440]: I0312 14:14:02.571956 7440 scope.go:117] "RemoveContainer" containerID="4767c99ca8b14443f1382cd9b5a19a4aba786928a26c41b8fce765c6d6383500" Mar 12 14:14:02.580114 master-0 kubenswrapper[7440]: I0312 14:14:02.580006 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerStarted","Data":"2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e"} Mar 12 14:14:02.598170 master-0 kubenswrapper[7440]: I0312 14:14:02.598128 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"] Mar 12 14:14:02.614733 master-0 kubenswrapper[7440]: I0312 14:14:02.614707 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" Mar 12 14:14:02.614987 master-0 kubenswrapper[7440]: I0312 14:14:02.614884 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-757d65d745-gzpdw" event={"ID":"1edf236b-654d-4568-ab33-b1f408dcbec6","Type":"ContainerDied","Data":"8d6e945225bb5f896e615cb1136c4b7a8164a71da35bf0c82c5fc6e8b79b6cc2"} Mar 12 14:14:02.615189 master-0 kubenswrapper[7440]: I0312 14:14:02.615170 7440 scope.go:117] "RemoveContainer" containerID="7baf84a669ae145308ec696ea2a3c0f0d8a3eaa1489aa598dc200ebb070fc533" Mar 12 14:14:02.632030 master-0 kubenswrapper[7440]: I0312 14:14:02.629059 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="1abac70444f37ebc5d0a9feab691c5f95fb4db1e5c3e7cd1fedbd5970be25447" exitCode=0 Mar 12 14:14:02.632030 master-0 kubenswrapper[7440]: I0312 14:14:02.629147 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"1abac70444f37ebc5d0a9feab691c5f95fb4db1e5c3e7cd1fedbd5970be25447"} Mar 12 14:14:02.632030 master-0 kubenswrapper[7440]: I0312 14:14:02.629607 7440 scope.go:117] "RemoveContainer" containerID="1abac70444f37ebc5d0a9feab691c5f95fb4db1e5c3e7cd1fedbd5970be25447" Mar 
12 14:14:02.647618 master-0 kubenswrapper[7440]: I0312 14:14:02.647174 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"b0d9b5d35890bf7ee8f33755b50b3d62e47a389cd7d7e50fa4af660965af6cae"} Mar 12 14:14:02.653121 master-0 kubenswrapper[7440]: I0312 14:14:02.653062 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.653121 master-0 kubenswrapper[7440]: I0312 14:14:02.653124 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2zk\" (UniqueName: \"kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653202 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653224 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" 
Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653245 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653279 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653309 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.653365 master-0 kubenswrapper[7440]: I0312 14:14:02.653338 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.654030 master-0 kubenswrapper[7440]: I0312 14:14:02.653987 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bba274a-38c7-4d13-88a5-6bc39228416c" containerID="b4956129e01655acfb40ce60e009de2d9707827560481d924db590d2b05e8343" exitCode=0
Mar 12 14:14:02.654102 master-0 kubenswrapper[7440]: I0312 14:14:02.654028 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerDied","Data":"b4956129e01655acfb40ce60e009de2d9707827560481d924db590d2b05e8343"}
Mar 12 14:14:02.654374 master-0 kubenswrapper[7440]: I0312 14:14:02.654342 7440 scope.go:117] "RemoveContainer" containerID="b4956129e01655acfb40ce60e009de2d9707827560481d924db590d2b05e8343"
Mar 12 14:14:02.659151 master-0 kubenswrapper[7440]: I0312 14:14:02.659119 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.663305 master-0 kubenswrapper[7440]: I0312 14:14:02.662042 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.663305 master-0 kubenswrapper[7440]: I0312 14:14:02.663042 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.665813 master-0 kubenswrapper[7440]: I0312 14:14:02.665668 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.670604 master-0 kubenswrapper[7440]: I0312 14:14:02.670557 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.680174 master-0 kubenswrapper[7440]: I0312 14:14:02.680110 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.689304 master-0 kubenswrapper[7440]: I0312 14:14:02.689211 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.694588 master-0 kubenswrapper[7440]: I0312 14:14:02.694370 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2zk\" (UniqueName: \"kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.727434 master-0 kubenswrapper[7440]: I0312 14:14:02.725433 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"]
Mar 12 14:14:02.728705 master-0 kubenswrapper[7440]: I0312 14:14:02.728654 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-757d65d745-gzpdw"]
Mar 12 14:14:02.743142 master-0 kubenswrapper[7440]: I0312 14:14:02.743084 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:02.802327 master-0 kubenswrapper[7440]: I0312 14:14:02.802294 7440 scope.go:117] "RemoveContainer" containerID="46ceffc4cf5b43d6667c001d7ca724c81abc46d22b9354d94d793fd041e473d2"
Mar 12 14:14:02.921104 master-0 kubenswrapper[7440]: I0312 14:14:02.921061 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-s5wj4"]
Mar 12 14:14:02.957148 master-0 kubenswrapper[7440]: I0312 14:14:02.954595 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:14:03.449413 master-0 kubenswrapper[7440]: I0312 14:14:03.449368 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-794bf69795-vntlz"]
Mar 12 14:14:03.454432 master-0 kubenswrapper[7440]: I0312 14:14:03.454200 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:14:03.780212 master-0 kubenswrapper[7440]: I0312 14:14:03.777250 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerStarted","Data":"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"}
Mar 12 14:14:03.869940 master-0 kubenswrapper[7440]: I0312 14:14:03.866762 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edf236b-654d-4568-ab33-b1f408dcbec6" path="/var/lib/kubelet/pods/1edf236b-654d-4568-ab33-b1f408dcbec6/volumes"
Mar 12 14:14:03.869940 master-0 kubenswrapper[7440]: I0312 14:14:03.867271 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"06754d581cc8aca46ceb909759a4cdf83f5358cda6d0633cc92ae3b0cb8c8c05"}
Mar 12 14:14:03.869940 master-0 kubenswrapper[7440]: I0312 14:14:03.867795 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4622r" podStartSLOduration=2.9702114760000002 podStartE2EDuration="57.867777692s" podCreationTimestamp="2026-03-12 14:13:06 +0000 UTC" firstStartedPulling="2026-03-12 14:13:07.587494811 +0000 UTC m=+47.922873370" lastFinishedPulling="2026-03-12 14:14:02.485061027 +0000 UTC m=+102.820439586" observedRunningTime="2026-03-12 14:14:03.834180863 +0000 UTC m=+104.169559422" watchObservedRunningTime="2026-03-12 14:14:03.867777692 +0000 UTC m=+104.203156261"
Mar 12 14:14:03.895028 master-0 kubenswrapper[7440]: I0312 14:14:03.890196 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af"}
Mar 12 14:14:03.926927 master-0 kubenswrapper[7440]: I0312 14:14:03.923066 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" podStartSLOduration=6.90418283 podStartE2EDuration="32.923050194s" podCreationTimestamp="2026-03-12 14:13:31 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.291878267 +0000 UTC m=+76.627256826" lastFinishedPulling="2026-03-12 14:14:02.310745631 +0000 UTC m=+102.646124190" observedRunningTime="2026-03-12 14:14:03.922373937 +0000 UTC m=+104.257752506" watchObservedRunningTime="2026-03-12 14:14:03.923050194 +0000 UTC m=+104.258428753"
Mar 12 14:14:03.944955 master-0 kubenswrapper[7440]: I0312 14:14:03.939149 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerStarted","Data":"39547af9c96ab9ffa0c68d5520b2aefe82b1e2e9c5c31895677204de893a9b6a"}
Mar 12 14:14:04.028408 master-0 kubenswrapper[7440]: I0312 14:14:04.025609 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerStarted","Data":"c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f"}
Mar 12 14:14:04.052263 master-0 kubenswrapper[7440]: I0312 14:14:04.051753 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a"}
Mar 12 14:14:04.194173 master-0 kubenswrapper[7440]: I0312 14:14:04.175595 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerStarted","Data":"88258e715bc540b76097bb99083cec5e9e7c8071119a50353c605425f13a6d2b"}
Mar 12 14:14:04.194173 master-0 kubenswrapper[7440]: I0312 14:14:04.193857 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podStartSLOduration=17.193835597 podStartE2EDuration="17.193835597s" podCreationTimestamp="2026-03-12 14:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:14:04.19361265 +0000 UTC m=+104.528991209" watchObservedRunningTime="2026-03-12 14:14:04.193835597 +0000 UTC m=+104.529214156"
Mar 12 14:14:04.208059 master-0 kubenswrapper[7440]: I0312 14:14:04.194618 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" podStartSLOduration=17.128129009 podStartE2EDuration="43.194612086s" podCreationTimestamp="2026-03-12 14:13:21 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.234022492 +0000 UTC m=+76.569401051" lastFinishedPulling="2026-03-12 14:14:02.300505569 +0000 UTC m=+102.635884128" observedRunningTime="2026-03-12 14:14:04.124915724 +0000 UTC m=+104.460294283" watchObservedRunningTime="2026-03-12 14:14:04.194612086 +0000 UTC m=+104.529990645"
Mar 12 14:14:04.208059 master-0 kubenswrapper[7440]: I0312 14:14:04.203701 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerStarted","Data":"942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705"}
Mar 12 14:14:04.208059 master-0 kubenswrapper[7440]: I0312 14:14:04.204595 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"
Mar 12 14:14:04.208059 master-0 kubenswrapper[7440]: I0312 14:14:04.207242 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f"}
Mar 12 14:14:04.213960 master-0 kubenswrapper[7440]: I0312 14:14:04.210520 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerStarted","Data":"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"}
Mar 12 14:14:04.265407 master-0 kubenswrapper[7440]: I0312 14:14:04.259688 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"
Mar 12 14:14:04.286927 master-0 kubenswrapper[7440]: I0312 14:14:04.285275 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"e7dea74eb883602f1f3d133f192958f321d40672d5572126aaddfb68d54ed527"}
Mar 12 14:14:04.295881 master-0 kubenswrapper[7440]: I0312 14:14:04.291022 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qngm" podStartSLOduration=4.303989668 podStartE2EDuration="57.29100268s" podCreationTimestamp="2026-03-12 14:13:07 +0000 UTC" firstStartedPulling="2026-03-12 14:13:09.599261152 +0000 UTC m=+49.934639711" lastFinishedPulling="2026-03-12 14:14:02.586274164 +0000 UTC m=+102.921652723" observedRunningTime="2026-03-12 14:14:04.288142727 +0000 UTC m=+104.623521316" watchObservedRunningTime="2026-03-12 14:14:04.29100268 +0000 UTC m=+104.626381229"
Mar 12 14:14:04.300973 master-0 kubenswrapper[7440]: I0312 14:14:04.298554 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae"}
Mar 12 14:14:04.302656 master-0 kubenswrapper[7440]: I0312 14:14:04.302599 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"3acdc56c43692bcfd84f78b7447975cc602b8dce78d52adc35c712d43e43e0fa"}
Mar 12 14:14:04.316919 master-0 kubenswrapper[7440]: I0312 14:14:04.310264 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"10ebd0ad67dc09a94de6455e90b725a93074cf336ebd90eea3f8574d71ab8322"}
Mar 12 14:14:04.371921 master-0 kubenswrapper[7440]: I0312 14:14:04.364839 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:14:04.389918 master-0 kubenswrapper[7440]: I0312 14:14:04.386515 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ns7pm" podStartSLOduration=9.435097761 podStartE2EDuration="56.38649872s" podCreationTimestamp="2026-03-12 14:13:08 +0000 UTC" firstStartedPulling="2026-03-12 14:13:15.634889785 +0000 UTC m=+55.970268344" lastFinishedPulling="2026-03-12 14:14:02.586290744 +0000 UTC m=+102.921669303" observedRunningTime="2026-03-12 14:14:04.338171846 +0000 UTC m=+104.673550405" watchObservedRunningTime="2026-03-12 14:14:04.38649872 +0000 UTC m=+104.721877299"
Mar 12 14:14:04.406149 master-0 kubenswrapper[7440]: I0312 14:14:04.404429 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerStarted","Data":"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"}
Mar 12 14:14:04.406149 master-0 kubenswrapper[7440]: I0312 14:14:04.404813 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="kube-rbac-proxy" containerID="cri-o://60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e" gracePeriod=30
Mar 12 14:14:04.406149 master-0 kubenswrapper[7440]: I0312 14:14:04.404943 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="machine-approver-controller" containerID="cri-o://098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f" gracePeriod=30
Mar 12 14:14:04.436044 master-0 kubenswrapper[7440]: I0312 14:14:04.435235 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31"}
Mar 12 14:14:04.452751 master-0 kubenswrapper[7440]: I0312 14:14:04.449861 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podStartSLOduration=37.44984155 podStartE2EDuration="37.44984155s" podCreationTimestamp="2026-03-12 14:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:14:04.387738272 +0000 UTC m=+104.723116831" watchObservedRunningTime="2026-03-12 14:14:04.44984155 +0000 UTC m=+104.785220129"
Mar 12 14:14:04.510583 master-0 kubenswrapper[7440]: I0312 14:14:04.489074 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/0.log"
Mar 12 14:14:04.510583 master-0 kubenswrapper[7440]: I0312 14:14:04.489207 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307"}
Mar 12 14:14:04.551996 master-0 kubenswrapper[7440]: I0312 14:14:04.546312 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"aca8c7cb3cefb96ea167603c4fdab132577bdaf6be51eb609e79f8b9ea4df1b7"}
Mar 12 14:14:04.584331 master-0 kubenswrapper[7440]: I0312 14:14:04.584281 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58"}
Mar 12 14:14:04.587411 master-0 kubenswrapper[7440]: I0312 14:14:04.586006 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"
Mar 12 14:14:04.591233 master-0 kubenswrapper[7440]: I0312 14:14:04.591201 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c"}
Mar 12 14:14:04.670417 master-0 kubenswrapper[7440]: I0312 14:14:04.664845 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" podStartSLOduration=24.591050522 podStartE2EDuration="34.664821665s" podCreationTimestamp="2026-03-12 14:13:30 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.564155493 +0000 UTC m=+76.899534052" lastFinishedPulling="2026-03-12 14:13:46.637926636 +0000 UTC m=+86.973305195" observedRunningTime="2026-03-12 14:14:04.628852716 +0000 UTC m=+104.964231275" watchObservedRunningTime="2026-03-12 14:14:04.664821665 +0000 UTC m=+105.000200224"
Mar 12 14:14:04.708923 master-0 kubenswrapper[7440]: I0312 14:14:04.706023 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"
Mar 12 14:14:04.718083 master-0 kubenswrapper[7440]: I0312 14:14:04.718038 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:14:04.766338 master-0 kubenswrapper[7440]: I0312 14:14:04.766269 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" podStartSLOduration=20.464978417 podStartE2EDuration="40.766248867s" podCreationTimestamp="2026-03-12 14:13:24 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.185163291 +0000 UTC m=+76.520541850" lastFinishedPulling="2026-03-12 14:13:56.486433741 +0000 UTC m=+96.821812300" observedRunningTime="2026-03-12 14:14:04.685020881 +0000 UTC m=+105.020399450" watchObservedRunningTime="2026-03-12 14:14:04.766248867 +0000 UTC m=+105.101627436"
Mar 12 14:14:04.833513 master-0 kubenswrapper[7440]: I0312 14:14:04.833437 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" podStartSLOduration=9.908288491 podStartE2EDuration="35.833416644s" podCreationTimestamp="2026-03-12 14:13:29 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.386275495 +0000 UTC m=+76.721654054" lastFinishedPulling="2026-03-12 14:14:02.311403648 +0000 UTC m=+102.646782207" observedRunningTime="2026-03-12 14:14:04.826626182 +0000 UTC m=+105.162004751" watchObservedRunningTime="2026-03-12 14:14:04.833416644 +0000 UTC m=+105.168795203"
Mar 12 14:14:04.854416 master-0 kubenswrapper[7440]: I0312 14:14:04.854368 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls\") pod \"ca50659e-7afc-4c81-b89f-2386ca173c18\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") "
Mar 12 14:14:04.854710 master-0 kubenswrapper[7440]: I0312 14:14:04.854688 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht5pc\" (UniqueName: \"kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc\") pod \"ca50659e-7afc-4c81-b89f-2386ca173c18\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") "
Mar 12 14:14:04.854838 master-0 kubenswrapper[7440]: I0312 14:14:04.854820 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config\") pod \"ca50659e-7afc-4c81-b89f-2386ca173c18\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") "
Mar 12 14:14:04.854964 master-0 kubenswrapper[7440]: I0312 14:14:04.854945 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config\") pod \"ca50659e-7afc-4c81-b89f-2386ca173c18\" (UID: \"ca50659e-7afc-4c81-b89f-2386ca173c18\") "
Mar 12 14:14:04.855999 master-0 kubenswrapper[7440]: I0312 14:14:04.855977 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config" (OuterVolumeSpecName: "config") pod "ca50659e-7afc-4c81-b89f-2386ca173c18" (UID: "ca50659e-7afc-4c81-b89f-2386ca173c18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:14:04.858698 master-0 kubenswrapper[7440]: I0312 14:14:04.857333 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "ca50659e-7afc-4c81-b89f-2386ca173c18" (UID: "ca50659e-7afc-4c81-b89f-2386ca173c18"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:14:04.861612 master-0 kubenswrapper[7440]: I0312 14:14:04.859030 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "ca50659e-7afc-4c81-b89f-2386ca173c18" (UID: "ca50659e-7afc-4c81-b89f-2386ca173c18"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:14:04.862030 master-0 kubenswrapper[7440]: I0312 14:14:04.861985 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc" (OuterVolumeSpecName: "kube-api-access-ht5pc") pod "ca50659e-7afc-4c81-b89f-2386ca173c18" (UID: "ca50659e-7afc-4c81-b89f-2386ca173c18"). InnerVolumeSpecName "kube-api-access-ht5pc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:14:04.947625 master-0 kubenswrapper[7440]: I0312 14:14:04.945528 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" podStartSLOduration=4.871102247 podStartE2EDuration="29.94551041s" podCreationTimestamp="2026-03-12 14:13:35 +0000 UTC" firstStartedPulling="2026-03-12 14:13:37.288384449 +0000 UTC m=+77.623763008" lastFinishedPulling="2026-03-12 14:14:02.362792602 +0000 UTC m=+102.698171171" observedRunningTime="2026-03-12 14:14:04.895236856 +0000 UTC m=+105.230615415" watchObservedRunningTime="2026-03-12 14:14:04.94551041 +0000 UTC m=+105.280888969"
Mar 12 14:14:04.968820 master-0 kubenswrapper[7440]: I0312 14:14:04.956576 7440 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ca50659e-7afc-4c81-b89f-2386ca173c18-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:04.968820 master-0 kubenswrapper[7440]: I0312 14:14:04.956615 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht5pc\" (UniqueName: \"kubernetes.io/projected/ca50659e-7afc-4c81-b89f-2386ca173c18-kube-api-access-ht5pc\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:04.968820 master-0 kubenswrapper[7440]: I0312 14:14:04.956625 7440 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:04.968820 master-0 kubenswrapper[7440]: I0312 14:14:04.956635 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca50659e-7afc-4c81-b89f-2386ca173c18-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:05.018777 master-0 kubenswrapper[7440]: I0312 14:14:05.014198 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" podStartSLOduration=5.7070708 podStartE2EDuration="29.014173486s" podCreationTimestamp="2026-03-12 14:13:36 +0000 UTC" firstStartedPulling="2026-03-12 14:13:39.155388804 +0000 UTC m=+79.490767363" lastFinishedPulling="2026-03-12 14:14:02.46249149 +0000 UTC m=+102.797870049" observedRunningTime="2026-03-12 14:14:04.952264172 +0000 UTC m=+105.287642741" watchObservedRunningTime="2026-03-12 14:14:05.014173486 +0000 UTC m=+105.349552045"
Mar 12 14:14:05.150037 master-0 kubenswrapper[7440]: I0312 14:14:05.144869 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podStartSLOduration=38.144850626 podStartE2EDuration="38.144850626s" podCreationTimestamp="2026-03-12 14:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:14:05.14029804 +0000 UTC m=+105.475676599" watchObservedRunningTime="2026-03-12 14:14:05.144850626 +0000 UTC m=+105.480229175"
Mar 12 14:14:05.599104 master-0 kubenswrapper[7440]: I0312 14:14:05.599058 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"5f66e3c8b94fb51a0c6c9ea1e5170e6a0c1589229e247c05b279a57ea1791d02"}
Mar 12 14:14:05.601078 master-0 kubenswrapper[7440]: I0312 14:14:05.601035 7440 generic.go:334] "Generic (PLEG): container finished" podID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerID="098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f" exitCode=0
Mar 12 14:14:05.601212 master-0 kubenswrapper[7440]: I0312 14:14:05.601090 7440 generic.go:334] "Generic (PLEG): container finished" podID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerID="60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e" exitCode=0
Mar 12 14:14:05.601212 master-0 kubenswrapper[7440]: I0312 14:14:05.601066 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerDied","Data":"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"}
Mar 12 14:14:05.601212 master-0 kubenswrapper[7440]: I0312 14:14:05.601157 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerDied","Data":"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"}
Mar 12 14:14:05.601212 master-0 kubenswrapper[7440]: I0312 14:14:05.601175 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc" event={"ID":"ca50659e-7afc-4c81-b89f-2386ca173c18","Type":"ContainerDied","Data":"6f7345e68d0284239c7ffeb41360ca60627706e3ed5e6f0ee04f56580c16d2e9"}
Mar 12 14:14:05.601212 master-0 kubenswrapper[7440]: I0312 14:14:05.601192 7440 scope.go:117] "RemoveContainer" containerID="098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"
Mar 12 14:14:05.601559 master-0 kubenswrapper[7440]: I0312 14:14:05.601531 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"
Mar 12 14:14:05.603097 master-0 kubenswrapper[7440]: I0312 14:14:05.602675 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571"}
Mar 12 14:14:05.605177 master-0 kubenswrapper[7440]: I0312 14:14:05.605102 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"b2880692b11dddefd5768e7f708988a4a68f0f5399d1041e081e8804f1478aff"}
Mar 12 14:14:05.606886 master-0 kubenswrapper[7440]: I0312 14:14:05.606844 7440 generic.go:334] "Generic (PLEG): container finished" podID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerID="4de0a85e4d47c7fb4dc863fea7d92d4eeed644f410c3792a0156ceb688c0d760" exitCode=0
Mar 12 14:14:05.610669 master-0 kubenswrapper[7440]: I0312 14:14:05.606926 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerDied","Data":"4de0a85e4d47c7fb4dc863fea7d92d4eeed644f410c3792a0156ceb688c0d760"}
Mar 12 14:14:05.610669 master-0 kubenswrapper[7440]: I0312 14:14:05.608766 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"188113c35fe96cf36264ce279ce38efba594ca2f0808990ac18724ea42464967"}
Mar 12 14:14:05.610669 master-0 kubenswrapper[7440]: I0312 14:14:05.608797 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"7e8ded1c40f6f3e26e0bdf53cc47f92c6162eab80d359209d548be3dc3c1a52f"}
Mar 12 14:14:05.610669 master-0 kubenswrapper[7440]: I0312 14:14:05.610587 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"10e2670e6ab6b47f07948c60e7e3a46c3f0ed3468cba558c9fc231e5dc2ca43a"}
Mar 12 14:14:05.612798 master-0 kubenswrapper[7440]: I0312 14:14:05.612767 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerStarted","Data":"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f"}
Mar 12 14:14:05.613454 master-0 kubenswrapper[7440]: I0312 14:14:05.613382 7440 scope.go:117] "RemoveContainer" containerID="60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"
Mar 12 14:14:05.621140 master-0 kubenswrapper[7440]: I0312 14:14:05.621106 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerStarted","Data":"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67"}
Mar 12 14:14:05.640307 master-0 kubenswrapper[7440]: I0312 14:14:05.640265 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04"}
Mar 12 14:14:05.645563 master-0 kubenswrapper[7440]: I0312 14:14:05.645496 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" podStartSLOduration=10.486678448 podStartE2EDuration="36.645470702s" podCreationTimestamp="2026-03-12 14:13:29 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.256177261 +0000 UTC m=+76.591555810" lastFinishedPulling="2026-03-12 14:14:02.414969505 +0000 UTC m=+102.750348064" observedRunningTime="2026-03-12 14:14:05.644490758 +0000 UTC m=+105.979869327" watchObservedRunningTime="2026-03-12 14:14:05.645470702 +0000 UTC m=+105.980849291"
Mar 12 14:14:05.650757 master-0 kubenswrapper[7440]: I0312 14:14:05.646930 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be"}
Mar 12 14:14:05.662401 master-0 kubenswrapper[7440]: I0312 14:14:05.662191 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"1b9240ea02291ba731222eda95b14d923700f86aa2f9700200a3ef468ef2cb89"}
Mar 12 14:14:05.671483 master-0 kubenswrapper[7440]: I0312 14:14:05.671337 7440 scope.go:117] "RemoveContainer" containerID="098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"
Mar 12 14:14:05.672723 master-0 kubenswrapper[7440]: E0312 14:14:05.672696 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f\": container with ID starting with 098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f not found: ID does not exist" containerID="098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"
Mar 12 14:14:05.672873 master-0 kubenswrapper[7440]: I0312 14:14:05.672845 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"} err="failed to get container status \"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f\": rpc error: code = NotFound desc = could not find container \"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f\": container with ID starting with 098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f not found: ID does not exist"
Mar 12 14:14:05.672994 master-0 kubenswrapper[7440]: I0312 14:14:05.672979 7440 scope.go:117] "RemoveContainer" containerID="60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"
Mar 12 14:14:05.692612 master-0 kubenswrapper[7440]: E0312 14:14:05.692563 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e\": container with ID starting with 60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e not found: ID does not exist" containerID="60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"
Mar 12 14:14:05.692612 master-0 kubenswrapper[7440]: I0312 14:14:05.692601 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"} err="failed to get container status \"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e\": rpc error: code = NotFound desc = could not find container \"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e\": container with ID starting with 60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e not found: ID does not exist"
Mar 12 14:14:05.692806 master-0 kubenswrapper[7440]: I0312 14:14:05.692622 7440 scope.go:117] "RemoveContainer" containerID="098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"
Mar 12 14:14:05.697965 master-0 kubenswrapper[7440]: I0312
14:14:05.697802 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f"} err="failed to get container status \"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f\": rpc error: code = NotFound desc = could not find container \"098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f\": container with ID starting with 098ed9f2c427b81d65b020546224fb35f717dc84b8bb4d73a8d9597bd5875a4f not found: ID does not exist" Mar 12 14:14:05.697965 master-0 kubenswrapper[7440]: I0312 14:14:05.697842 7440 scope.go:117] "RemoveContainer" containerID="60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e" Mar 12 14:14:05.698227 master-0 kubenswrapper[7440]: I0312 14:14:05.697975 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="cluster-cloud-controller-manager" containerID="cri-o://9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7" gracePeriod=30 Mar 12 14:14:05.698227 master-0 kubenswrapper[7440]: I0312 14:14:05.698183 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerStarted","Data":"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"} Mar 12 14:14:05.698227 master-0 kubenswrapper[7440]: I0312 14:14:05.698203 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerStarted","Data":"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"} Mar 12 14:14:05.698227 
master-0 kubenswrapper[7440]: I0312 14:14:05.698219 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="kube-rbac-proxy" containerID="cri-o://a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f" gracePeriod=30 Mar 12 14:14:05.698404 master-0 kubenswrapper[7440]: I0312 14:14:05.698262 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="config-sync-controllers" containerID="cri-o://740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e" gracePeriod=30 Mar 12 14:14:05.706453 master-0 kubenswrapper[7440]: I0312 14:14:05.703004 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e"} err="failed to get container status \"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e\": rpc error: code = NotFound desc = could not find container \"60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e\": container with ID starting with 60d4e07599b638379384c9ffcfcd09977c6b4d80b1728d1e52c718e08335973e not found: ID does not exist" Mar 12 14:14:05.783174 master-0 kubenswrapper[7440]: I0312 14:14:05.778411 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-thh89" podStartSLOduration=4.925911402 podStartE2EDuration="59.77839144s" podCreationTimestamp="2026-03-12 14:13:06 +0000 UTC" firstStartedPulling="2026-03-12 14:13:07.59310416 +0000 UTC m=+47.928482719" lastFinishedPulling="2026-03-12 14:14:02.445584198 +0000 UTC m=+102.780962757" observedRunningTime="2026-03-12 14:14:05.773552337 
+0000 UTC m=+106.108930916" watchObservedRunningTime="2026-03-12 14:14:05.77839144 +0000 UTC m=+106.113769999" Mar 12 14:14:05.826612 master-0 kubenswrapper[7440]: I0312 14:14:05.824751 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" podStartSLOduration=12.037066828 podStartE2EDuration="37.824724294s" podCreationTimestamp="2026-03-12 14:13:28 +0000 UTC" firstStartedPulling="2026-03-12 14:13:36.715332159 +0000 UTC m=+77.050710708" lastFinishedPulling="2026-03-12 14:14:02.502989615 +0000 UTC m=+102.838368174" observedRunningTime="2026-03-12 14:14:05.813802456 +0000 UTC m=+106.149181015" watchObservedRunningTime="2026-03-12 14:14:05.824724294 +0000 UTC m=+106.160102853" Mar 12 14:14:05.865917 master-0 kubenswrapper[7440]: I0312 14:14:05.862336 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" podStartSLOduration=11.862314805 podStartE2EDuration="11.862314805s" podCreationTimestamp="2026-03-12 14:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:14:05.86094831 +0000 UTC m=+106.196326869" watchObservedRunningTime="2026-03-12 14:14:05.862314805 +0000 UTC m=+106.197693364" Mar 12 14:14:05.970750 master-0 kubenswrapper[7440]: I0312 14:14:05.961870 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:14:05.970750 master-0 kubenswrapper[7440]: I0312 14:14:05.962142 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="multus-admission-controller" containerID="cri-o://59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184" gracePeriod=30 
Mar 12 14:14:05.970750 master-0 kubenswrapper[7440]: I0312 14:14:05.962160 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="kube-rbac-proxy" containerID="cri-o://bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef" gracePeriod=30 Mar 12 14:14:05.987149 master-0 kubenswrapper[7440]: I0312 14:14:05.986790 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"] Mar 12 14:14:05.993190 master-0 kubenswrapper[7440]: I0312 14:14:05.993068 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" Mar 12 14:14:06.010769 master-0 kubenswrapper[7440]: I0312 14:14:06.008779 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6l2lc"] Mar 12 14:14:06.052927 master-0 kubenswrapper[7440]: I0312 14:14:06.048261 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" podStartSLOduration=4.934434332 podStartE2EDuration="30.048237418s" podCreationTimestamp="2026-03-12 14:13:36 +0000 UTC" firstStartedPulling="2026-03-12 14:13:37.347918255 +0000 UTC m=+77.683296814" lastFinishedPulling="2026-03-12 14:14:02.461721341 +0000 UTC m=+102.797099900" observedRunningTime="2026-03-12 14:14:06.0471545 +0000 UTC m=+106.382533079" watchObservedRunningTime="2026-03-12 14:14:06.048237418 +0000 UTC m=+106.383615977" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079177 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s"] Mar 12 14:14:06.081592 
master-0 kubenswrapper[7440]: E0312 14:14:06.079430 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079446 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: E0312 14:14:06.079459 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079467 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: E0312 14:14:06.079478 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="config-sync-controllers" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079486 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="config-sync-controllers" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: E0312 14:14:06.079499 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="machine-approver-controller" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079506 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="machine-approver-controller" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: E0312 14:14:06.079516 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="cluster-cloud-controller-manager" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 
14:14:06.079523 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="cluster-cloud-controller-manager" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079657 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="machine-approver-controller" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079667 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079674 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="cluster-cloud-controller-manager" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079684 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" containerName="kube-rbac-proxy" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.079694 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" containerName="config-sync-controllers" Mar 12 14:14:06.081592 master-0 kubenswrapper[7440]: I0312 14:14:06.080233 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.083121 master-0 kubenswrapper[7440]: I0312 14:14:06.083026 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 14:14:06.094740 master-0 kubenswrapper[7440]: I0312 14:14:06.094699 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 14:14:06.095712 master-0 kubenswrapper[7440]: I0312 14:14:06.095288 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-btzl2" Mar 12 14:14:06.095712 master-0 kubenswrapper[7440]: I0312 14:14:06.095414 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 14:14:06.095712 master-0 kubenswrapper[7440]: I0312 14:14:06.095687 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 14:14:06.101983 master-0 kubenswrapper[7440]: I0312 14:14:06.101786 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106115 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config\") pod \"bc86a749-8fef-462c-b422-95155cb6ca21\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106145 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c95r5\" (UniqueName: \"kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5\") pod 
\"bc86a749-8fef-462c-b422-95155cb6ca21\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106183 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images\") pod \"bc86a749-8fef-462c-b422-95155cb6ca21\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106213 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube\") pod \"bc86a749-8fef-462c-b422-95155cb6ca21\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106236 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls\") pod \"bc86a749-8fef-462c-b422-95155cb6ca21\" (UID: \"bc86a749-8fef-462c-b422-95155cb6ca21\") " Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106327 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106360 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9gvf\" (UniqueName: \"kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf\") pod 
\"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106381 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.106408 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.108092 master-0 kubenswrapper[7440]: I0312 14:14:06.107141 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "bc86a749-8fef-462c-b422-95155cb6ca21" (UID: "bc86a749-8fef-462c-b422-95155cb6ca21"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:14:06.117663 master-0 kubenswrapper[7440]: I0312 14:14:06.116780 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "bc86a749-8fef-462c-b422-95155cb6ca21" (UID: "bc86a749-8fef-462c-b422-95155cb6ca21"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:06.117663 master-0 kubenswrapper[7440]: I0312 14:14:06.117068 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images" (OuterVolumeSpecName: "images") pod "bc86a749-8fef-462c-b422-95155cb6ca21" (UID: "bc86a749-8fef-462c-b422-95155cb6ca21"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:14:06.123331 master-0 kubenswrapper[7440]: I0312 14:14:06.123058 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "bc86a749-8fef-462c-b422-95155cb6ca21" (UID: "bc86a749-8fef-462c-b422-95155cb6ca21"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:14:06.126345 master-0 kubenswrapper[7440]: I0312 14:14:06.126189 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5" (OuterVolumeSpecName: "kube-api-access-c95r5") pod "bc86a749-8fef-462c-b422-95155cb6ca21" (UID: "bc86a749-8fef-462c-b422-95155cb6ca21"). InnerVolumeSpecName "kube-api-access-c95r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213457 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9gvf\" (UniqueName: \"kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213538 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213594 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213736 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213856 7440 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-images\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213879 7440 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc86a749-8fef-462c-b422-95155cb6ca21-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213912 7440 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc86a749-8fef-462c-b422-95155cb6ca21-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213929 7440 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc86a749-8fef-462c-b422-95155cb6ca21-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.213940 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c95r5\" (UniqueName: \"kubernetes.io/projected/bc86a749-8fef-462c-b422-95155cb6ca21-kube-api-access-c95r5\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.214565 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.218569 master-0 kubenswrapper[7440]: I0312 14:14:06.215277 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod 
\"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.219862 master-0 kubenswrapper[7440]: I0312 14:14:06.219814 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.270616 master-0 kubenswrapper[7440]: I0312 14:14:06.270570 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9gvf\" (UniqueName: \"kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:14:06.381091 master-0 kubenswrapper[7440]: I0312 14:14:06.381045 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:14:06.381270 master-0 kubenswrapper[7440]: I0312 14:14:06.381229 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4622r" Mar 12 14:14:06.411927 master-0 kubenswrapper[7440]: I0312 14:14:06.410983 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 14:14:06.411927 master-0 kubenswrapper[7440]: I0312 14:14:06.411232 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c" gracePeriod=30 Mar 12 
14:14:06.411927 master-0 kubenswrapper[7440]: I0312 14:14:06.411364 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39" gracePeriod=30
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.413225 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: E0312 14:14:06.413488 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.413505 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: E0312 14:14:06.413527 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.413536 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.413651 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.413668 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 12 14:14:06.416188 master-0 kubenswrapper[7440]: I0312 14:14:06.415637 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.460886 master-0 kubenswrapper[7440]: I0312 14:14:06.460836 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:14:06.468178 master-0 kubenswrapper[7440]: I0312 14:14:06.468088 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s"
Mar 12 14:14:06.517156 master-0 kubenswrapper[7440]: I0312 14:14:06.517109 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.517282 master-0 kubenswrapper[7440]: I0312 14:14:06.517168 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.517282 master-0 kubenswrapper[7440]: I0312 14:14:06.517197 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.517282 master-0 kubenswrapper[7440]: I0312 14:14:06.517222 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.517282 master-0 kubenswrapper[7440]: I0312 14:14:06.517247 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.517282 master-0 kubenswrapper[7440]: I0312 14:14:06.517260 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.585938 master-0 kubenswrapper[7440]: I0312 14:14:06.585861 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-thh89"
Mar 12 14:14:06.586146 master-0 kubenswrapper[7440]: I0312 14:14:06.585954 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-thh89"
Mar 12 14:14:06.618569 master-0 kubenswrapper[7440]: I0312 14:14:06.618501 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.618569 master-0 kubenswrapper[7440]: I0312 14:14:06.618562 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618618 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618651 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618682 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618716 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618793 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618835 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618862 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618888 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618933 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.619154 master-0 kubenswrapper[7440]: I0312 14:14:06.618958 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:06.706271 master-0 kubenswrapper[7440]: I0312 14:14:06.706214 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"a071b87c5a3a1d570849d8f30a4ef18e47cf5ac7ae26cb6fa07ebd774622be6c"}
Mar 12 14:14:06.707691 master-0 kubenswrapper[7440]: I0312 14:14:06.707659 7440 generic.go:334] "Generic (PLEG): container finished" podID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerID="bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef" exitCode=0
Mar 12 14:14:06.707751 master-0 kubenswrapper[7440]: I0312 14:14:06.707715 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerDied","Data":"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef"}
Mar 12 14:14:06.709377 master-0 kubenswrapper[7440]: I0312 14:14:06.709340 7440 generic.go:334] "Generic (PLEG): container finished" podID="bc86a749-8fef-462c-b422-95155cb6ca21" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f" exitCode=0
Mar 12 14:14:06.709377 master-0 kubenswrapper[7440]: I0312 14:14:06.709368 7440 generic.go:334] "Generic (PLEG): container finished" podID="bc86a749-8fef-462c-b422-95155cb6ca21" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e" exitCode=0
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709381 7440 generic.go:334] "Generic (PLEG): container finished" podID="bc86a749-8fef-462c-b422-95155cb6ca21" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7" exitCode=0
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709418 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerDied","Data":"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"}
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709438 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerDied","Data":"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"}
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709452 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerDied","Data":"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"}
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709463 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2" event={"ID":"bc86a749-8fef-462c-b422-95155cb6ca21","Type":"ContainerDied","Data":"2d5c60f6fb14b7b43695baab60e1577ff08272e1f7ae298ac4d7d3adc1ea87f7"}
Mar 12 14:14:06.709481 master-0 kubenswrapper[7440]: I0312 14:14:06.709480 7440 scope.go:117] "RemoveContainer" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"
Mar 12 14:14:06.709672 master-0 kubenswrapper[7440]: I0312 14:14:06.709614 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"
Mar 12 14:14:06.716290 master-0 kubenswrapper[7440]: I0312 14:14:06.716257 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerStarted","Data":"82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187"}
Mar 12 14:14:06.733035 master-0 kubenswrapper[7440]: I0312 14:14:06.732996 7440 scope.go:117] "RemoveContainer" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"
Mar 12 14:14:06.773675 master-0 kubenswrapper[7440]: I0312 14:14:06.773009 7440 scope.go:117] "RemoveContainer" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"
Mar 12 14:14:06.805154 master-0 kubenswrapper[7440]: I0312 14:14:06.804029 7440 scope.go:117] "RemoveContainer" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"
Mar 12 14:14:06.805154 master-0 kubenswrapper[7440]: E0312 14:14:06.804811 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": container with ID starting with a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f not found: ID does not exist" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"
Mar 12 14:14:06.805154 master-0 kubenswrapper[7440]: I0312 14:14:06.804853 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"} err="failed to get container status \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": rpc error: code = NotFound desc = could not find container \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": container with ID starting with a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f not found: ID does not exist"
Mar 12 14:14:06.805154 master-0 kubenswrapper[7440]: I0312 14:14:06.804874 7440 scope.go:117] "RemoveContainer" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"
Mar 12 14:14:06.805621 master-0 kubenswrapper[7440]: E0312 14:14:06.805577 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": container with ID starting with 740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e not found: ID does not exist" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"
Mar 12 14:14:06.805687 master-0 kubenswrapper[7440]: I0312 14:14:06.805636 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"} err="failed to get container status \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": rpc error: code = NotFound desc = could not find container \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": container with ID starting with 740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e not found: ID does not exist"
Mar 12 14:14:06.805687 master-0 kubenswrapper[7440]: I0312 14:14:06.805672 7440 scope.go:117] "RemoveContainer" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"
Mar 12 14:14:06.807129 master-0 kubenswrapper[7440]: E0312 14:14:06.807083 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": container with ID starting with 9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7 not found: ID does not exist" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"
Mar 12 14:14:06.807206 master-0 kubenswrapper[7440]: I0312 14:14:06.807125 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"} err="failed to get container status \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": rpc error: code = NotFound desc = could not find container \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": container with ID starting with 9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7 not found: ID does not exist"
Mar 12 14:14:06.807206 master-0 kubenswrapper[7440]: I0312 14:14:06.807146 7440 scope.go:117] "RemoveContainer" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"
Mar 12 14:14:06.807686 master-0 kubenswrapper[7440]: I0312 14:14:06.807643 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"} err="failed to get container status \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": rpc error: code = NotFound desc = could not find container \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": container with ID starting with a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f not found: ID does not exist"
Mar 12 14:14:06.807686 master-0 kubenswrapper[7440]: I0312 14:14:06.807679 7440 scope.go:117] "RemoveContainer" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"
Mar 12 14:14:06.809615 master-0 kubenswrapper[7440]: I0312 14:14:06.809276 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"} err="failed to get container status \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": rpc error: code = NotFound desc = could not find container \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": container with ID starting with 740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e not found: ID does not exist"
Mar 12 14:14:06.809615 master-0 kubenswrapper[7440]: I0312 14:14:06.809333 7440 scope.go:117] "RemoveContainer" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"
Mar 12 14:14:06.809749 master-0 kubenswrapper[7440]: I0312 14:14:06.809614 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"} err="failed to get container status \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": rpc error: code = NotFound desc = could not find container \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": container with ID starting with 9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7 not found: ID does not exist"
Mar 12 14:14:06.809749 master-0 kubenswrapper[7440]: I0312 14:14:06.809631 7440 scope.go:117] "RemoveContainer" containerID="a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"
Mar 12 14:14:06.810641 master-0 kubenswrapper[7440]: I0312 14:14:06.810006 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f"} err="failed to get container status \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": rpc error: code = NotFound desc = could not find container \"a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f\": container with ID starting with a0a520da1d97d9f957f5e0e17e1c46e894bd5a2abf966141481649c922a71e6f not found: ID does not exist"
Mar 12 14:14:06.810641 master-0 kubenswrapper[7440]: I0312 14:14:06.810039 7440 scope.go:117] "RemoveContainer" containerID="740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"
Mar 12 14:14:06.814964 master-0 kubenswrapper[7440]: I0312 14:14:06.811103 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e"} err="failed to get container status \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": rpc error: code = NotFound desc = could not find container \"740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e\": container with ID starting with 740eebd052d9dcbb2586a5a74ab08b5ed16965cead0fec06cf7fee07a487e80e not found: ID does not exist"
Mar 12 14:14:06.814964 master-0 kubenswrapper[7440]: I0312 14:14:06.811131 7440 scope.go:117] "RemoveContainer" containerID="9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"
Mar 12 14:14:06.814964 master-0 kubenswrapper[7440]: I0312 14:14:06.811412 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7"} err="failed to get container status \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": rpc error: code = NotFound desc = could not find container \"9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7\": container with ID starting with 9e4a052d66b8a86f9dde7d3e1771bff40120b5229ae08cab4c4d06bd8c8a4ec7 not found: ID does not exist"
Mar 12 14:14:07.416271 master-0 kubenswrapper[7440]: I0312 14:14:07.416208 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4622r" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:14:07.416271 master-0 kubenswrapper[7440]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:14:07.416271 master-0 kubenswrapper[7440]: >
Mar 12 14:14:07.624464 master-0 kubenswrapper[7440]: I0312 14:14:07.624395 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-thh89" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:14:07.624464 master-0 kubenswrapper[7440]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:14:07.624464 master-0 kubenswrapper[7440]: >
Mar 12 14:14:07.727803 master-0 kubenswrapper[7440]: I0312 14:14:07.727671 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184"}
Mar 12 14:14:07.727803 master-0 kubenswrapper[7440]: I0312 14:14:07.727717 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"9f02bf384767db17e8e9570ea753dcefdc9a2ea0cf7d2650e496583afd2ebc7f"}
Mar 12 14:14:07.744529 master-0 kubenswrapper[7440]: I0312 14:14:07.744459 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:07.744529 master-0 kubenswrapper[7440]: I0312 14:14:07.744518 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:14:07.819741 master-0 kubenswrapper[7440]: I0312 14:14:07.819685 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca50659e-7afc-4c81-b89f-2386ca173c18" path="/var/lib/kubelet/pods/ca50659e-7afc-4c81-b89f-2386ca173c18/volumes"
Mar 12 14:14:07.976137 master-0 kubenswrapper[7440]: I0312 14:14:07.976041 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qngm"
Mar 12 14:14:07.976137 master-0 kubenswrapper[7440]: I0312 14:14:07.976145 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qngm"
Mar 12 14:14:08.015410 master-0 kubenswrapper[7440]: I0312 14:14:08.015363 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qngm"
Mar 12 14:14:08.733215 master-0 kubenswrapper[7440]: I0312 14:14:08.733151 7440 generic.go:334] "Generic (PLEG): container finished" podID="dd29b21c-7a0e-4311-952f-427b00468e66" containerID="06754d581cc8aca46ceb909759a4cdf83f5358cda6d0633cc92ae3b0cb8c8c05" exitCode=0
Mar 12 14:14:08.733215 master-0 kubenswrapper[7440]: I0312 14:14:08.733201 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerDied","Data":"06754d581cc8aca46ceb909759a4cdf83f5358cda6d0633cc92ae3b0cb8c8c05"}
Mar 12 14:14:08.734524 master-0 kubenswrapper[7440]: I0312 14:14:08.734488 7440 scope.go:117] "RemoveContainer" containerID="06754d581cc8aca46ceb909759a4cdf83f5358cda6d0633cc92ae3b0cb8c8c05"
Mar 12 14:14:08.775266 master-0 kubenswrapper[7440]: I0312 14:14:08.775156 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qngm"
Mar 12 14:14:09.739450 master-0 kubenswrapper[7440]: I0312 14:14:09.739365 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075"}
Mar 12 14:14:11.318375 master-0 kubenswrapper[7440]: I0312 14:14:11.318319 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ns7pm"
Mar 12 14:14:11.318375 master-0 kubenswrapper[7440]: I0312 14:14:11.318367 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ns7pm"
Mar 12 14:14:12.356101 master-0 kubenswrapper[7440]: I0312 14:14:12.356040 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ns7pm" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:14:12.356101 master-0 kubenswrapper[7440]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:14:12.356101 master-0 kubenswrapper[7440]: >
Mar 12 14:14:16.420400 master-0 kubenswrapper[7440]: I0312 14:14:16.420291 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4622r"
Mar 12 14:14:16.454131 master-0 kubenswrapper[7440]: I0312 14:14:16.454081 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4622r"
Mar 12 14:14:16.621002 master-0 kubenswrapper[7440]: I0312 14:14:16.620934 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-thh89"
Mar 12 14:14:16.656105 master-0 kubenswrapper[7440]: I0312 14:14:16.656048 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-thh89"
Mar 12 14:14:17.745024 master-0 kubenswrapper[7440]: I0312 14:14:17.744947 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 14:14:17.745543 master-0 kubenswrapper[7440]: I0312 14:14:17.745043 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:14:19.448958 master-0 kubenswrapper[7440]: E0312 14:14:19.448873 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:19.449544 master-0 kubenswrapper[7440]: I0312 14:14:19.449377 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 14:14:19.475700 master-0 kubenswrapper[7440]: W0312 14:14:19.475524 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-1e6eb7aa078492d2528a9200bd80537980b32a04e2a20d081923a96e6ddc03d7 WatchSource:0}: Error finding container 1e6eb7aa078492d2528a9200bd80537980b32a04e2a20d081923a96e6ddc03d7: Status 404 returned error can't find the container with id 1e6eb7aa078492d2528a9200bd80537980b32a04e2a20d081923a96e6ddc03d7
Mar 12 14:14:19.790721 master-0 kubenswrapper[7440]: I0312 14:14:19.790495 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"1e6eb7aa078492d2528a9200bd80537980b32a04e2a20d081923a96e6ddc03d7"}
Mar 12 14:14:20.802555 master-0 kubenswrapper[7440]: I0312 14:14:20.802411 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="257ef0c1a29111b804b93df184b1276c19040c3b46129a42b1e503f5e1905151" exitCode=0
Mar 12 14:14:20.803152 master-0 kubenswrapper[7440]: I0312 14:14:20.803089 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"257ef0c1a29111b804b93df184b1276c19040c3b46129a42b1e503f5e1905151"}
Mar 12 14:14:20.809121 master-0 kubenswrapper[7440]: I0312 14:14:20.809066 7440 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65" exitCode=1
Mar 12 14:14:20.809540 master-0 kubenswrapper[7440]: I0312 14:14:20.809131 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65"}
Mar 12 14:14:20.809540 master-0 kubenswrapper[7440]: I0312 14:14:20.809182 7440 scope.go:117] "RemoveContainer" containerID="fc7c0f722bd2f10c123348ade47d19a8deffa1a39c549432778dbf52755ce3ca"
Mar 12 14:14:20.810278 master-0 kubenswrapper[7440]: I0312 14:14:20.810053 7440 scope.go:117] "RemoveContainer" containerID="db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65"
Mar 12 14:14:20.812873 master-0 kubenswrapper[7440]: I0312 14:14:20.812841 7440 generic.go:334] "Generic (PLEG): container finished" podID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerID="44912c45860c53bd920d6344d008ca95bda45324f0583a0a019e5ef0a05b1d24" exitCode=0
Mar 12 14:14:20.813031 master-0 kubenswrapper[7440]: I0312 14:14:20.812876 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerDied","Data":"44912c45860c53bd920d6344d008ca95bda45324f0583a0a019e5ef0a05b1d24"}
Mar 12 14:14:21.380206 master-0 kubenswrapper[7440]: I0312 14:14:21.380146 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ns7pm"
Mar 12 14:14:21.426864 master-0 kubenswrapper[7440]: I0312 14:14:21.426799 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ns7pm"
Mar 12 14:14:21.832926 master-0 kubenswrapper[7440]: I0312 14:14:21.832849 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694" exitCode=1
Mar 12 14:14:21.834166 master-0 kubenswrapper[7440]: I0312 14:14:21.834099 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694"}
Mar 12 14:14:21.836229 master-0 kubenswrapper[7440]: I0312 14:14:21.836174 7440 scope.go:117] "RemoveContainer" containerID="d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694"
Mar 12 14:14:21.839033 master-0 kubenswrapper[7440]: I0312 14:14:21.839016 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c4d90f1c1d446b898ed50108e2482967a437ec5d999259ff9e991131aa20b54a"}
Mar 12 14:14:22.149727 master-0 kubenswrapper[7440]: I0312 14:14:22.149663 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 12 14:14:22.340650 master-0 kubenswrapper[7440]: I0312 14:14:22.340575 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock\") pod \"23b56974-d2b1-4205-af5a-70cc2b616d1a\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") "
Mar 12 14:14:22.340650 master-0 kubenswrapper[7440]: I0312 14:14:22.340643 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir\") pod \"23b56974-d2b1-4205-af5a-70cc2b616d1a\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") "
Mar 12 14:14:22.340940 master-0 kubenswrapper[7440]: I0312 14:14:22.340679 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access\") pod \"23b56974-d2b1-4205-af5a-70cc2b616d1a\" (UID: \"23b56974-d2b1-4205-af5a-70cc2b616d1a\") "
Mar 12 14:14:22.340940 master-0 kubenswrapper[7440]: I0312 14:14:22.340817 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock" (OuterVolumeSpecName: "var-lock") pod "23b56974-d2b1-4205-af5a-70cc2b616d1a" (UID: "23b56974-d2b1-4205-af5a-70cc2b616d1a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:14:22.340940 master-0 kubenswrapper[7440]: I0312 14:14:22.340847 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23b56974-d2b1-4205-af5a-70cc2b616d1a" (UID: "23b56974-d2b1-4205-af5a-70cc2b616d1a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:14:22.355947 master-0 kubenswrapper[7440]: I0312 14:14:22.355855 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23b56974-d2b1-4205-af5a-70cc2b616d1a" (UID: "23b56974-d2b1-4205-af5a-70cc2b616d1a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:14:22.441555 master-0 kubenswrapper[7440]: I0312 14:14:22.441502 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:22.441555 master-0 kubenswrapper[7440]: I0312 14:14:22.441534 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23b56974-d2b1-4205-af5a-70cc2b616d1a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:22.441555 master-0 kubenswrapper[7440]: I0312 14:14:22.441545 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23b56974-d2b1-4205-af5a-70cc2b616d1a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:14:22.845640 master-0 kubenswrapper[7440]: I0312 14:14:22.845592 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_56fb91c7-1b94-4f59-82f2-3025f0b02e43/installer/0.log"
Mar 12 14:14:22.846119 master-0 kubenswrapper[7440]: I0312 14:14:22.845644 7440 generic.go:334] "Generic (PLEG): container finished" podID="56fb91c7-1b94-4f59-82f2-3025f0b02e43" containerID="03429e462f0622cfb4b81f008568fcb386a658560e44c8b3a80cc0aa9bf08473" exitCode=1
Mar 12 14:14:22.846119 master-0 kubenswrapper[7440]: I0312 14:14:22.845695 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"56fb91c7-1b94-4f59-82f2-3025f0b02e43","Type":"ContainerDied","Data":"03429e462f0622cfb4b81f008568fcb386a658560e44c8b3a80cc0aa9bf08473"}
Mar 12 14:14:22.847557 master-0 kubenswrapper[7440]: I0312 14:14:22.847523 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 12 14:14:22.847664 master-0 kubenswrapper[7440]: I0312 14:14:22.847522 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerDied","Data":"5d684cba0a95ae743814a8952b46742b894c87c51cb377826df98e54818be432"}
Mar 12 14:14:22.847717 master-0 kubenswrapper[7440]: I0312 14:14:22.847672 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d684cba0a95ae743814a8952b46742b894c87c51cb377826df98e54818be432"
Mar 12 14:14:22.849819 master-0 kubenswrapper[7440]: I0312 14:14:22.849713 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"95ba11fc8a440b0f75fb1a6bf90aed334dc73dd1799f7af488f9efe94a5e77b1"}
Mar 12 14:14:22.852123 master-0 kubenswrapper[7440]: I0312 14:14:22.852071 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_efd52682-bf05-44fc-9790-8adfc87ca087/installer/0.log"
Mar 12 14:14:22.852209 master-0 kubenswrapper[7440]: I0312 14:14:22.852182 7440 generic.go:334] "Generic (PLEG): container finished" podID="efd52682-bf05-44fc-9790-8adfc87ca087" containerID="a7a831aba8d50e763154f735949d2f89a1f0e98463882117ee4053d40ba3f7ce" exitCode=1
Mar 12 14:14:22.852264 master-0 kubenswrapper[7440]: I0312 14:14:22.852210 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0"
event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerDied","Data":"a7a831aba8d50e763154f735949d2f89a1f0e98463882117ee4053d40ba3f7ce"} Mar 12 14:14:24.074204 master-0 kubenswrapper[7440]: I0312 14:14:24.074163 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:14:24.249874 master-0 kubenswrapper[7440]: I0312 14:14:24.249856 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_efd52682-bf05-44fc-9790-8adfc87ca087/installer/0.log" Mar 12 14:14:24.250061 master-0 kubenswrapper[7440]: I0312 14:14:24.250038 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:14:24.254812 master-0 kubenswrapper[7440]: I0312 14:14:24.254775 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_56fb91c7-1b94-4f59-82f2-3025f0b02e43/installer/0.log" Mar 12 14:14:24.254874 master-0 kubenswrapper[7440]: I0312 14:14:24.254853 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:14:24.326238 master-0 kubenswrapper[7440]: I0312 14:14:24.326196 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:14:24.363264 master-0 kubenswrapper[7440]: I0312 14:14:24.363187 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir\") pod \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " Mar 12 14:14:24.363264 master-0 kubenswrapper[7440]: I0312 14:14:24.363266 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir\") pod \"efd52682-bf05-44fc-9790-8adfc87ca087\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363319 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access\") pod \"efd52682-bf05-44fc-9790-8adfc87ca087\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363324 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56fb91c7-1b94-4f59-82f2-3025f0b02e43" (UID: "56fb91c7-1b94-4f59-82f2-3025f0b02e43"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363377 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "efd52682-bf05-44fc-9790-8adfc87ca087" (UID: "efd52682-bf05-44fc-9790-8adfc87ca087"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363378 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock\") pod \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363441 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock" (OuterVolumeSpecName: "var-lock") pod "56fb91c7-1b94-4f59-82f2-3025f0b02e43" (UID: "56fb91c7-1b94-4f59-82f2-3025f0b02e43"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:24.363780 master-0 kubenswrapper[7440]: I0312 14:14:24.363470 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access\") pod \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\" (UID: \"56fb91c7-1b94-4f59-82f2-3025f0b02e43\") " Mar 12 14:14:24.364535 master-0 kubenswrapper[7440]: I0312 14:14:24.364268 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock\") pod \"efd52682-bf05-44fc-9790-8adfc87ca087\" (UID: \"efd52682-bf05-44fc-9790-8adfc87ca087\") " Mar 12 14:14:24.364535 master-0 kubenswrapper[7440]: I0312 14:14:24.364343 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock" (OuterVolumeSpecName: "var-lock") pod "efd52682-bf05-44fc-9790-8adfc87ca087" (UID: "efd52682-bf05-44fc-9790-8adfc87ca087"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:24.364746 master-0 kubenswrapper[7440]: I0312 14:14:24.364712 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.364746 master-0 kubenswrapper[7440]: I0312 14:14:24.364738 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.364865 master-0 kubenswrapper[7440]: I0312 14:14:24.364754 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efd52682-bf05-44fc-9790-8adfc87ca087-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.364865 master-0 kubenswrapper[7440]: I0312 14:14:24.364765 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56fb91c7-1b94-4f59-82f2-3025f0b02e43-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.366778 master-0 kubenswrapper[7440]: I0312 14:14:24.366720 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56fb91c7-1b94-4f59-82f2-3025f0b02e43" (UID: "56fb91c7-1b94-4f59-82f2-3025f0b02e43"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:24.366881 master-0 kubenswrapper[7440]: I0312 14:14:24.366836 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "efd52682-bf05-44fc-9790-8adfc87ca087" (UID: "efd52682-bf05-44fc-9790-8adfc87ca087"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:24.466618 master-0 kubenswrapper[7440]: I0312 14:14:24.466448 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efd52682-bf05-44fc-9790-8adfc87ca087-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.466618 master-0 kubenswrapper[7440]: I0312 14:14:24.466536 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56fb91c7-1b94-4f59-82f2-3025f0b02e43-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:24.566943 master-0 kubenswrapper[7440]: E0312 14:14:24.566688 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:14:14Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:14:14Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:14:14Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:14:14Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"regis
try.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f
71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\
":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:24.864215 master-0 kubenswrapper[7440]: I0312 14:14:24.864170 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_efd52682-bf05-44fc-9790-8adfc87ca087/installer/0.log" Mar 12 14:14:24.864392 master-0 kubenswrapper[7440]: I0312 14:14:24.864266 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerDied","Data":"83a78b6bdc6bac34701501df7342c8dd451a72192f273fdc21aa0b983df21030"} Mar 12 14:14:24.864392 master-0 kubenswrapper[7440]: I0312 14:14:24.864293 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a78b6bdc6bac34701501df7342c8dd451a72192f273fdc21aa0b983df21030" Mar 12 14:14:24.864392 master-0 kubenswrapper[7440]: I0312 14:14:24.864316 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:14:24.866497 master-0 kubenswrapper[7440]: I0312 14:14:24.866461 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_56fb91c7-1b94-4f59-82f2-3025f0b02e43/installer/0.log" Mar 12 14:14:24.866623 master-0 kubenswrapper[7440]: I0312 14:14:24.866588 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"56fb91c7-1b94-4f59-82f2-3025f0b02e43","Type":"ContainerDied","Data":"df05180a9aba2b044bc8cc4f8bc493121c5ae7f993a124591b67cf0a86c60578"} Mar 12 14:14:24.866623 master-0 kubenswrapper[7440]: I0312 14:14:24.866612 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 14:14:24.866698 master-0 kubenswrapper[7440]: I0312 14:14:24.866625 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df05180a9aba2b044bc8cc4f8bc493121c5ae7f993a124591b67cf0a86c60578" Mar 12 14:14:24.982807 master-0 kubenswrapper[7440]: E0312 14:14:24.982749 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:27.074676 master-0 kubenswrapper[7440]: I0312 14:14:27.074534 7440 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:27.746029 master-0 kubenswrapper[7440]: I0312 14:14:27.745861 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:14:27.746029 master-0 kubenswrapper[7440]: I0312 14:14:27.745934 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:31.828940 master-0 kubenswrapper[7440]: 
I0312 14:14:31.828810 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c0743910-1ba7-490d-bc3e-5126562b04aa/installer/0.log" Mar 12 14:14:31.828940 master-0 kubenswrapper[7440]: I0312 14:14:31.828862 7440 generic.go:334] "Generic (PLEG): container finished" podID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerID="763faa898e18449dd9a50b708e0137c7362e38addce32c4afec9964d733e4f39" exitCode=1 Mar 12 14:14:31.828940 master-0 kubenswrapper[7440]: I0312 14:14:31.828911 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerDied","Data":"763faa898e18449dd9a50b708e0137c7362e38addce32c4afec9964d733e4f39"} Mar 12 14:14:33.108742 master-0 kubenswrapper[7440]: I0312 14:14:33.108697 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c0743910-1ba7-490d-bc3e-5126562b04aa/installer/0.log" Mar 12 14:14:33.109225 master-0 kubenswrapper[7440]: I0312 14:14:33.108756 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:14:33.248296 master-0 kubenswrapper[7440]: I0312 14:14:33.248221 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access\") pod \"c0743910-1ba7-490d-bc3e-5126562b04aa\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " Mar 12 14:14:33.248526 master-0 kubenswrapper[7440]: I0312 14:14:33.248305 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir\") pod \"c0743910-1ba7-490d-bc3e-5126562b04aa\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " Mar 12 14:14:33.248526 master-0 kubenswrapper[7440]: I0312 14:14:33.248351 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock\") pod \"c0743910-1ba7-490d-bc3e-5126562b04aa\" (UID: \"c0743910-1ba7-490d-bc3e-5126562b04aa\") " Mar 12 14:14:33.248594 master-0 kubenswrapper[7440]: I0312 14:14:33.248536 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c0743910-1ba7-490d-bc3e-5126562b04aa" (UID: "c0743910-1ba7-490d-bc3e-5126562b04aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:33.248594 master-0 kubenswrapper[7440]: I0312 14:14:33.248578 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "c0743910-1ba7-490d-bc3e-5126562b04aa" (UID: "c0743910-1ba7-490d-bc3e-5126562b04aa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:33.248806 master-0 kubenswrapper[7440]: I0312 14:14:33.248776 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:33.248806 master-0 kubenswrapper[7440]: I0312 14:14:33.248800 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0743910-1ba7-490d-bc3e-5126562b04aa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:33.251292 master-0 kubenswrapper[7440]: I0312 14:14:33.251258 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c0743910-1ba7-490d-bc3e-5126562b04aa" (UID: "c0743910-1ba7-490d-bc3e-5126562b04aa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:33.350685 master-0 kubenswrapper[7440]: I0312 14:14:33.350529 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0743910-1ba7-490d-bc3e-5126562b04aa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:33.807913 master-0 kubenswrapper[7440]: E0312 14:14:33.807821 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 14:14:33.840943 master-0 kubenswrapper[7440]: I0312 14:14:33.840858 7440 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39" exitCode=0 Mar 12 14:14:33.843264 master-0 kubenswrapper[7440]: I0312 14:14:33.843240 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c0743910-1ba7-490d-bc3e-5126562b04aa/installer/0.log" Mar 12 14:14:33.843330 master-0 kubenswrapper[7440]: I0312 14:14:33.843286 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerDied","Data":"ad667b1962e9be89dad22c04e8baae0b8b39d88482f4ed8d30c8828a965ec326"} Mar 12 14:14:33.843330 master-0 kubenswrapper[7440]: I0312 14:14:33.843312 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad667b1962e9be89dad22c04e8baae0b8b39d88482f4ed8d30c8828a965ec326" Mar 12 14:14:33.843394 master-0 kubenswrapper[7440]: I0312 14:14:33.843371 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:14:34.568047 master-0 kubenswrapper[7440]: E0312 14:14:34.567847 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:34.848984 master-0 kubenswrapper[7440]: I0312 14:14:34.848784 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="70f8a10f08775f9ef9b766aaa2353e10257f6f7a64d18cef4a9ce779cf9930f3" exitCode=0 Mar 12 14:14:34.848984 master-0 kubenswrapper[7440]: I0312 14:14:34.848830 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"70f8a10f08775f9ef9b766aaa2353e10257f6f7a64d18cef4a9ce779cf9930f3"} Mar 12 14:14:34.983607 master-0 kubenswrapper[7440]: E0312 14:14:34.983528 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:36.527421 master-0 kubenswrapper[7440]: I0312 14:14:36.527271 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 12 14:14:36.527421 master-0 kubenswrapper[7440]: I0312 14:14:36.527366 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 14:14:36.692723 master-0 kubenswrapper[7440]: I0312 14:14:36.692605 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 12 14:14:36.692723 master-0 kubenswrapper[7440]: I0312 14:14:36.692696 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 12 14:14:36.692945 master-0 kubenswrapper[7440]: I0312 14:14:36.692776 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:36.692945 master-0 kubenswrapper[7440]: I0312 14:14:36.692845 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:14:36.693142 master-0 kubenswrapper[7440]: I0312 14:14:36.693111 7440 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:36.693142 master-0 kubenswrapper[7440]: I0312 14:14:36.693131 7440 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:36.746890 master-0 kubenswrapper[7440]: I0312 14:14:36.745994 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-sm9nb_7023af8b-bfcc-4253-85cd-d891dff1c86e/multus-admission-controller/0.log" Mar 12 14:14:36.746890 master-0 kubenswrapper[7440]: I0312 14:14:36.746078 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:14:36.861200 master-0 kubenswrapper[7440]: I0312 14:14:36.861112 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 12 14:14:36.861448 master-0 kubenswrapper[7440]: I0312 14:14:36.861219 7440 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c" exitCode=137 Mar 12 14:14:36.861448 master-0 kubenswrapper[7440]: I0312 14:14:36.861298 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 14:14:36.861448 master-0 kubenswrapper[7440]: I0312 14:14:36.861396 7440 scope.go:117] "RemoveContainer" containerID="d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39" Mar 12 14:14:36.863648 master-0 kubenswrapper[7440]: I0312 14:14:36.863614 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-sm9nb_7023af8b-bfcc-4253-85cd-d891dff1c86e/multus-admission-controller/0.log" Mar 12 14:14:36.863709 master-0 kubenswrapper[7440]: I0312 14:14:36.863666 7440 generic.go:334] "Generic (PLEG): container finished" podID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerID="59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184" exitCode=137 Mar 12 14:14:36.863709 master-0 kubenswrapper[7440]: I0312 14:14:36.863698 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerDied","Data":"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184"} Mar 12 14:14:36.863792 master-0 kubenswrapper[7440]: I0312 14:14:36.863725 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" event={"ID":"7023af8b-bfcc-4253-85cd-d891dff1c86e","Type":"ContainerDied","Data":"fab4209128367cae9aae1c602fe8e2a20cfcbb53ea4e672f691caba442c30231"} Mar 12 14:14:36.863792 master-0 kubenswrapper[7440]: I0312 14:14:36.863736 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-sm9nb" Mar 12 14:14:36.875161 master-0 kubenswrapper[7440]: I0312 14:14:36.875121 7440 scope.go:117] "RemoveContainer" containerID="857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c" Mar 12 14:14:36.890180 master-0 kubenswrapper[7440]: I0312 14:14:36.890132 7440 scope.go:117] "RemoveContainer" containerID="d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39" Mar 12 14:14:36.890652 master-0 kubenswrapper[7440]: E0312 14:14:36.890614 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39\": container with ID starting with d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39 not found: ID does not exist" containerID="d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39" Mar 12 14:14:36.890700 master-0 kubenswrapper[7440]: I0312 14:14:36.890662 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39"} err="failed to get container status \"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39\": rpc error: code = NotFound desc = could not find container \"d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39\": container with ID starting with d62d60cfbaec34b17f1179067155a280075561a18ae5a4aaf75af0a737c10b39 not found: ID does not exist" Mar 12 14:14:36.890700 master-0 kubenswrapper[7440]: I0312 14:14:36.890691 7440 scope.go:117] "RemoveContainer" containerID="857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c" Mar 12 14:14:36.891116 master-0 kubenswrapper[7440]: E0312 14:14:36.891067 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c\": container with ID starting with 857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c not found: ID does not exist" containerID="857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c" Mar 12 14:14:36.891175 master-0 kubenswrapper[7440]: I0312 14:14:36.891115 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c"} err="failed to get container status \"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c\": rpc error: code = NotFound desc = could not find container \"857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c\": container with ID starting with 857cc78e0c0678c5508c4eb58b1fbdd872cb096a1de1ff4746f9a88c2863a73c not found: ID does not exist" Mar 12 14:14:36.891175 master-0 kubenswrapper[7440]: I0312 14:14:36.891138 7440 scope.go:117] "RemoveContainer" containerID="bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef" Mar 12 14:14:36.895598 master-0 kubenswrapper[7440]: I0312 14:14:36.895564 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") pod \"7023af8b-bfcc-4253-85cd-d891dff1c86e\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " Mar 12 14:14:36.895665 master-0 kubenswrapper[7440]: I0312 14:14:36.895628 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") pod \"7023af8b-bfcc-4253-85cd-d891dff1c86e\" (UID: \"7023af8b-bfcc-4253-85cd-d891dff1c86e\") " Mar 12 14:14:36.898427 master-0 kubenswrapper[7440]: I0312 14:14:36.898383 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "7023af8b-bfcc-4253-85cd-d891dff1c86e" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:14:36.898499 master-0 kubenswrapper[7440]: I0312 14:14:36.898431 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476" (OuterVolumeSpecName: "kube-api-access-dm476") pod "7023af8b-bfcc-4253-85cd-d891dff1c86e" (UID: "7023af8b-bfcc-4253-85cd-d891dff1c86e"). InnerVolumeSpecName "kube-api-access-dm476". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:36.903784 master-0 kubenswrapper[7440]: I0312 14:14:36.903752 7440 scope.go:117] "RemoveContainer" containerID="59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184" Mar 12 14:14:36.915249 master-0 kubenswrapper[7440]: I0312 14:14:36.915201 7440 scope.go:117] "RemoveContainer" containerID="bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef" Mar 12 14:14:36.915767 master-0 kubenswrapper[7440]: E0312 14:14:36.915726 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef\": container with ID starting with bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef not found: ID does not exist" containerID="bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef" Mar 12 14:14:36.915815 master-0 kubenswrapper[7440]: I0312 14:14:36.915767 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef"} err="failed to get container status \"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef\": rpc error: code = 
NotFound desc = could not find container \"bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef\": container with ID starting with bbf8648501855090b8f097caff2cdeb613eb87fa32c1c70b502f2307573cd6ef not found: ID does not exist" Mar 12 14:14:36.915815 master-0 kubenswrapper[7440]: I0312 14:14:36.915792 7440 scope.go:117] "RemoveContainer" containerID="59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184" Mar 12 14:14:36.916200 master-0 kubenswrapper[7440]: E0312 14:14:36.916167 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184\": container with ID starting with 59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184 not found: ID does not exist" containerID="59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184" Mar 12 14:14:36.916200 master-0 kubenswrapper[7440]: I0312 14:14:36.916187 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184"} err="failed to get container status \"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184\": rpc error: code = NotFound desc = could not find container \"59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184\": container with ID starting with 59225193c476309a0aa5efa9f60ce80fa3d02930e0324fa57c25ccd5390ef184 not found: ID does not exist" Mar 12 14:14:36.996755 master-0 kubenswrapper[7440]: I0312 14:14:36.996680 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm476\" (UniqueName: \"kubernetes.io/projected/7023af8b-bfcc-4253-85cd-d891dff1c86e-kube-api-access-dm476\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:36.996755 master-0 kubenswrapper[7440]: I0312 14:14:36.996741 7440 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7023af8b-bfcc-4253-85cd-d891dff1c86e-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:14:37.073938 master-0 kubenswrapper[7440]: I0312 14:14:37.073810 7440 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:37.746644 master-0 kubenswrapper[7440]: I0312 14:14:37.746573 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:14:37.747310 master-0 kubenswrapper[7440]: I0312 14:14:37.746663 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:37.811139 master-0 kubenswrapper[7440]: I0312 14:14:37.811081 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 12 14:14:37.811745 master-0 kubenswrapper[7440]: I0312 14:14:37.811694 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 14:14:40.456883 master-0 kubenswrapper[7440]: E0312 14:14:40.456358 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context 
deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c1d8d65b91bbf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:14:06.411357119 +0000 UTC m=+106.746735678,LastTimestamp:2026-03-12 14:14:06.411357119 +0000 UTC m=+106.746735678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:14:44.569284 master-0 kubenswrapper[7440]: E0312 14:14:44.569040 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:44.985049 master-0 kubenswrapper[7440]: E0312 14:14:44.984555 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:47.073850 master-0 kubenswrapper[7440]: I0312 14:14:47.073675 7440 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:47.747655 master-0 kubenswrapper[7440]: I0312 14:14:47.747589 7440 patch_prober.go:28] interesting 
pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:14:47.747854 master-0 kubenswrapper[7440]: I0312 14:14:47.747653 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:47.853646 master-0 kubenswrapper[7440]: E0312 14:14:47.853579 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 14:14:48.942394 master-0 kubenswrapper[7440]: I0312 14:14:48.942257 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="89d8d59bf6fa2a26b3f43dce31271bb83151aa62ec4c71c1e3cb8e9ec9a4453c" exitCode=0 Mar 12 14:14:48.944914 master-0 kubenswrapper[7440]: I0312 14:14:48.944878 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/0.log" Mar 12 14:14:48.945041 master-0 kubenswrapper[7440]: I0312 14:14:48.945023 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="6ba212567515d3f9436de59fb6dea21c7df5a57d0a71d8f4512b348613929a0b" exitCode=1 Mar 12 14:14:54.570340 master-0 kubenswrapper[7440]: E0312 14:14:54.570250 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:54.985521 master-0 kubenswrapper[7440]: E0312 14:14:54.985365 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:14:57.748616 master-0 kubenswrapper[7440]: I0312 14:14:57.748556 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:14:57.749377 master-0 kubenswrapper[7440]: I0312 14:14:57.749327 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 14:15:01.015131 master-0 kubenswrapper[7440]: I0312 14:15:01.015078 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rqq4v_e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/approver/0.log" Mar 12 14:15:01.015751 master-0 kubenswrapper[7440]: I0312 14:15:01.015523 7440 generic.go:334] "Generic (PLEG): container finished" podID="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" containerID="6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99" exitCode=1 Mar 12 14:15:01.947524 master-0 kubenswrapper[7440]: E0312 14:15:01.947437 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" 
failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 14:15:04.571112 master-0 kubenswrapper[7440]: E0312 14:15:04.570999 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:15:04.571112 master-0 kubenswrapper[7440]: E0312 14:15:04.571071 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:15:04.986996 master-0 kubenswrapper[7440]: E0312 14:15:04.986486 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:15:04.986996 master-0 kubenswrapper[7440]: I0312 14:15:04.986791 7440 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 14:15:06.462098 master-0 kubenswrapper[7440]: I0312 14:15:06.462029 7440 status_manager.go:851] "Failed to get status for pod" podUID="0a898118-6d01-4211-92f0-43967b75405c" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-config-operator-64488f9d78-ljnjj)" Mar 12 14:15:06.716150 master-0 kubenswrapper[7440]: I0312 14:15:06.716021 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": read tcp 10.128.0.2:44902->10.128.0.68:8443: read: connection reset by peer" start-of-body= Mar 12 14:15:06.716150 master-0 kubenswrapper[7440]: 
I0312 14:15:06.716083 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": read tcp 10.128.0.2:44902->10.128.0.68:8443: read: connection reset by peer" Mar 12 14:15:06.716630 master-0 kubenswrapper[7440]: I0312 14:15:06.716588 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:06.716761 master-0 kubenswrapper[7440]: I0312 14:15:06.716723 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:07.065778 master-0 kubenswrapper[7440]: I0312 14:15:07.065681 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/oauth-apiserver/0.log" Mar 12 14:15:07.066327 master-0 kubenswrapper[7440]: I0312 14:15:07.066263 7440 generic.go:334] "Generic (PLEG): container finished" podID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerID="82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187" exitCode=1 Mar 12 14:15:07.744484 master-0 kubenswrapper[7440]: I0312 14:15:07.744401 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 
14:15:07.744993 master-0 kubenswrapper[7440]: I0312 14:15:07.744494 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:11.813919 master-0 kubenswrapper[7440]: E0312 14:15:11.813838 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 14:15:11.814362 master-0 kubenswrapper[7440]: E0312 14:15:11.814021 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Mar 12 14:15:11.819767 master-0 kubenswrapper[7440]: I0312 14:15:11.819727 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 14:15:12.745619 master-0 kubenswrapper[7440]: I0312 14:15:12.745544 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:12.745960 master-0 kubenswrapper[7440]: I0312 14:15:12.745674 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:14.459455 master-0 kubenswrapper[7440]: E0312 14:15:14.459288 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context 
deadline exceeded" event="&Event{ObjectMeta:{machine-approver-754bdc9f9d-44b6s.189c1d8d6a7936c9 openshift-cluster-machine-approver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-machine-approver,Name:machine-approver-754bdc9f9d-44b6s,UID:40912d56-8288-4d58-ad91-7455bd460887,APIVersion:v1,ResourceVersion:9917,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:14:06.491055817 +0000 UTC m=+106.826434376,LastTimestamp:2026-03-12 14:14:06.491055817 +0000 UTC m=+106.826434376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:15:14.991121 master-0 kubenswrapper[7440]: E0312 14:15:14.991062 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 14:15:17.744999 master-0 kubenswrapper[7440]: I0312 14:15:17.744888 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:17.746157 master-0 kubenswrapper[7440]: I0312 14:15:17.746087 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get 
\"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:22.168424 master-0 kubenswrapper[7440]: I0312 14:15:22.168318 7440 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c4d90f1c1d446b898ed50108e2482967a437ec5d999259ff9e991131aa20b54a" exitCode=1 Mar 12 14:15:22.745128 master-0 kubenswrapper[7440]: I0312 14:15:22.745018 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:22.745419 master-0 kubenswrapper[7440]: I0312 14:15:22.745150 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:24.928933 master-0 kubenswrapper[7440]: E0312 14:15:24.928689 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:15:14Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:15:14Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:15:14Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:15:14Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\
"],\\\"sizeBytes\\\":468263999}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:15:25.192921 master-0 kubenswrapper[7440]: E0312 14:15:25.192748 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 14:15:27.745453 master-0 kubenswrapper[7440]: I0312 14:15:27.745343 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:27.749289 master-0 kubenswrapper[7440]: I0312 14:15:27.745455 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:32.745357 master-0 kubenswrapper[7440]: I0312 14:15:32.745270 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 
10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:32.745357 master-0 kubenswrapper[7440]: I0312 14:15:32.745343 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:34.929303 master-0 kubenswrapper[7440]: E0312 14:15:34.929123 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 12 14:15:35.594536 master-0 kubenswrapper[7440]: E0312 14:15:35.594436 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 12 14:15:37.745276 master-0 kubenswrapper[7440]: I0312 14:15:37.745201 7440 patch_prober.go:28] interesting pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:37.745276 master-0 kubenswrapper[7440]: I0312 14:15:37.745268 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:42.744883 master-0 kubenswrapper[7440]: I0312 14:15:42.744758 7440 patch_prober.go:28] interesting 
pod/apiserver-794bf69795-vntlz container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Mar 12 14:15:42.744883 master-0 kubenswrapper[7440]: I0312 14:15:42.744836 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podUID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.68:8443/livez\": dial tcp 10.128.0.68:8443: connect: connection refused" Mar 12 14:15:44.930857 master-0 kubenswrapper[7440]: E0312 14:15:44.930381 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:15:45.821953 master-0 kubenswrapper[7440]: E0312 14:15:45.821804 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 14:15:45.823487 master-0 kubenswrapper[7440]: E0312 14:15:45.823432 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.009s" Mar 12 14:15:45.823487 master-0 kubenswrapper[7440]: I0312 14:15:45.823478 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:15:45.832574 master-0 kubenswrapper[7440]: I0312 14:15:45.831894 7440 scope.go:117] "RemoveContainer" containerID="82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187" Mar 12 14:15:45.850139 master-0 kubenswrapper[7440]: I0312 14:15:45.850077 7440 scope.go:117] "RemoveContainer" 
containerID="6ba212567515d3f9436de59fb6dea21c7df5a57d0a71d8f4512b348613929a0b" Mar 12 14:15:45.851034 master-0 kubenswrapper[7440]: I0312 14:15:45.850847 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 14:15:45.852013 master-0 kubenswrapper[7440]: I0312 14:15:45.851939 7440 scope.go:117] "RemoveContainer" containerID="c4d90f1c1d446b898ed50108e2482967a437ec5d999259ff9e991131aa20b54a" Mar 12 14:15:46.395974 master-0 kubenswrapper[7440]: E0312 14:15:46.395183 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="1.6s" Mar 12 14:15:46.509756 master-0 kubenswrapper[7440]: I0312 14:15:46.509722 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/oauth-apiserver/0.log" Mar 12 14:15:46.512106 master-0 kubenswrapper[7440]: I0312 14:15:46.512077 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/0.log" Mar 12 14:15:48.461629 master-0 kubenswrapper[7440]: E0312 14:15:48.461498 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-approver-754bdc9f9d-44b6s.189c1d8d77b9257b openshift-cluster-machine-approver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-machine-approver,Name:machine-approver-754bdc9f9d-44b6s,UID:40912d56-8288-4d58-ad91-7455bd460887,APIVersion:v1,ResourceVersion:9917,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Created,Message:Created container: 
kube-rbac-proxy,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:14:06.713349499 +0000 UTC m=+107.048728058,LastTimestamp:2026-03-12 14:14:06.713349499 +0000 UTC m=+107.048728058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:15:55.583445 master-0 kubenswrapper[7440]: I0312 14:15:55.583359 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerID="73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb" exitCode=0 Mar 12 14:15:57.599224 master-0 kubenswrapper[7440]: I0312 14:15:57.599179 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-754hn_1f9b15c6-b4ee-4907-8daa-376e3b438896/manager/0.log" Mar 12 14:15:57.599785 master-0 kubenswrapper[7440]: I0312 14:15:57.599231 7440 generic.go:334] "Generic (PLEG): container finished" podID="1f9b15c6-b4ee-4907-8daa-376e3b438896" containerID="ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da" exitCode=1 Mar 12 14:15:57.601078 master-0 kubenswrapper[7440]: I0312 14:15:57.601050 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-2pj4z_39252b5a-d014-4319-ad81-3c1bf2ef585e/manager/0.log" Mar 12 14:15:57.601483 master-0 kubenswrapper[7440]: I0312 14:15:57.601452 7440 generic.go:334] "Generic (PLEG): container finished" podID="39252b5a-d014-4319-ad81-3c1bf2ef585e" containerID="9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793" exitCode=1 Mar 12 14:15:58.573874 master-0 kubenswrapper[7440]: I0312 14:15:58.573684 7440 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-754hn container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get 
\"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Mar 12 14:15:58.573874 master-0 kubenswrapper[7440]: I0312 14:15:58.573805 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" podUID="1f9b15c6-b4ee-4907-8daa-376e3b438896" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Mar 12 14:16:00.254117 master-0 kubenswrapper[7440]: I0312 14:16:00.254056 7440 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-2pj4z container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 12 14:16:00.254772 master-0 kubenswrapper[7440]: I0312 14:16:00.254119 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" podUID="39252b5a-d014-4319-ad81-3c1bf2ef585e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 12 14:16:00.624366 master-0 kubenswrapper[7440]: I0312 14:16:00.624282 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/0.log" Mar 12 14:16:00.624596 master-0 kubenswrapper[7440]: I0312 14:16:00.624573 7440 generic.go:334] "Generic (PLEG): container finished" podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="d53adb45a67056ee01b81331e65f41973a39210d835cc7c159b8fe9b81f06549" exitCode=1 Mar 12 14:16:03.262003 master-0 kubenswrapper[7440]: I0312 14:16:03.261916 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff 
container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:16:03.262003 master-0 kubenswrapper[7440]: I0312 14:16:03.261976 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:16:03.262592 master-0 kubenswrapper[7440]: I0312 14:16:03.262039 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:16:03.262592 master-0 kubenswrapper[7440]: I0312 14:16:03.262118 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:16:06.463680 master-0 kubenswrapper[7440]: I0312 14:16:06.463591 7440 status_manager.go:851] "Failed to get status for pod" podUID="d181b683-a575-45a3-b736-ad4e07486545" pod="openshift-marketplace/redhat-marketplace-9qngm" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-9qngm)" Mar 12 14:16:06.848015 master-0 kubenswrapper[7440]: E0312 14:16:06.847961 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.024s" Mar 12 
14:16:06.855591 master-0 kubenswrapper[7440]: I0312 14:16:06.855541 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 14:16:06.858914 master-0 kubenswrapper[7440]: I0312 14:16:06.858856 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.858941 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.858962 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"89d8d59bf6fa2a26b3f43dce31271bb83151aa62ec4c71c1e3cb8e9ec9a4453c"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.858988 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"6ba212567515d3f9436de59fb6dea21c7df5a57d0a71d8f4512b348613929a0b"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859004 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859015 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerDied","Data":"6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859042 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"f877e9e772e626aee6aab05c7ac905f2c4beb3f6e88c57c25b9eaeab3e18035d"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859054 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"19b7b4eaee1a852a9ccf6d4df36d726273012941b1ee088eb660f41b5b7c26b8"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859065 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"8c824a81227bbc4977bfae432c464a86a92fba843d33ea60db40b0306a18e201"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859075 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"e16a62b4a09dc1bf1229b7f6c1c70a440164d0b0527802cf7ca0f10f946c47d1"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859085 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"ce4bbd63e68811b084a013b96af26d98956aa6df6255b0040e0ffbc96b8a34b0"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859095 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerDied","Data":"82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187"} Mar 12 14:16:06.859105 master-0 kubenswrapper[7440]: I0312 14:16:06.859108 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"c4d90f1c1d446b898ed50108e2482967a437ec5d999259ff9e991131aa20b54a"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859126 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"80ae1c45663433034e72c5c20f8723a435fbf83c810f99ce19145980cd404753"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859138 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerStarted","Data":"465db2374b4fcd6162b8dc553bb6f2ef4a19ba262a22ad29911e0930f35262e4"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859149 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859161 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerDied","Data":"73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859174 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerDied","Data":"ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859187 7440 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerDied","Data":"9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859200 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"d53adb45a67056ee01b81331e65f41973a39210d835cc7c159b8fe9b81f06549"} Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859500 7440 scope.go:117] "RemoveContainer" containerID="d53adb45a67056ee01b81331e65f41973a39210d835cc7c159b8fe9b81f06549" Mar 12 14:16:06.859608 master-0 kubenswrapper[7440]: I0312 14:16:06.859532 7440 scope.go:117] "RemoveContainer" containerID="6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99" Mar 12 14:16:06.859998 master-0 kubenswrapper[7440]: I0312 14:16:06.859706 7440 scope.go:117] "RemoveContainer" containerID="db63589c7d51a05a8314fa99d2bcd36f7d574dddf92caf850f4dc8319e77bd65" Mar 12 14:16:06.860205 master-0 kubenswrapper[7440]: I0312 14:16:06.860180 7440 scope.go:117] "RemoveContainer" containerID="73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb" Mar 12 14:16:06.860820 master-0 kubenswrapper[7440]: I0312 14:16:06.860501 7440 scope.go:117] "RemoveContainer" containerID="ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da" Mar 12 14:16:06.861219 master-0 kubenswrapper[7440]: I0312 14:16:06.861114 7440 scope.go:117] "RemoveContainer" containerID="9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793" Mar 12 14:16:06.909467 master-0 kubenswrapper[7440]: I0312 14:16:06.909410 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:16:06.909723 master-0 kubenswrapper[7440]: I0312 
14:16:06.909690 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qngm" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="registry-server" containerID="cri-o://3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67" gracePeriod=2 Mar 12 14:16:06.910942 master-0 kubenswrapper[7440]: I0312 14:16:06.910845 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 14:16:06.910942 master-0 kubenswrapper[7440]: I0312 14:16:06.910873 7440 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="81ce53a6-7f97-4868-be45-a3522575ee37" Mar 12 14:16:06.913291 master-0 kubenswrapper[7440]: I0312 14:16:06.913246 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:16:06.913420 master-0 kubenswrapper[7440]: I0312 14:16:06.913248 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" podStartSLOduration=120.913236792 podStartE2EDuration="2m0.913236792s" podCreationTimestamp="2026-03-12 14:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:06.840859059 +0000 UTC m=+227.176237638" watchObservedRunningTime="2026-03-12 14:16:06.913236792 +0000 UTC m=+227.248615351" Mar 12 14:16:06.913566 master-0 kubenswrapper[7440]: I0312 14:16:06.913534 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-thh89" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="registry-server" containerID="cri-o://0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f" gracePeriod=2 Mar 12 14:16:06.919461 master-0 kubenswrapper[7440]: I0312 14:16:06.919414 
7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 14:16:06.919693 master-0 kubenswrapper[7440]: I0312 14:16:06.919669 7440 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="81ce53a6-7f97-4868-be45-a3522575ee37" Mar 12 14:16:06.925575 master-0 kubenswrapper[7440]: I0312 14:16:06.925512 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:16:06.958350 master-0 kubenswrapper[7440]: I0312 14:16:06.958308 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 14:16:07.028174 master-0 kubenswrapper[7440]: I0312 14:16:07.026432 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:16:07.028174 master-0 kubenswrapper[7440]: I0312 14:16:07.026502 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 12 14:16:07.091447 master-0 kubenswrapper[7440]: I0312 14:16:07.091398 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:16:07.095829 master-0 kubenswrapper[7440]: I0312 14:16:07.095787 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-sm9nb"] Mar 12 14:16:07.134509 master-0 kubenswrapper[7440]: I0312 14:16:07.133240 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" podStartSLOduration=191.133219125 podStartE2EDuration="3m11.133219125s" podCreationTimestamp="2026-03-12 14:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:07.131669925 +0000 UTC m=+227.467048494" 
watchObservedRunningTime="2026-03-12 14:16:07.133219125 +0000 UTC m=+227.468597684" Mar 12 14:16:07.246695 master-0 kubenswrapper[7440]: I0312 14:16:07.246654 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"] Mar 12 14:16:07.252957 master-0 kubenswrapper[7440]: I0312 14:16:07.251890 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-9d9f2"] Mar 12 14:16:07.348742 master-0 kubenswrapper[7440]: I0312 14:16:07.348103 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.348080837 podStartE2EDuration="1.348080837s" podCreationTimestamp="2026-03-12 14:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:07.345590403 +0000 UTC m=+227.680968972" watchObservedRunningTime="2026-03-12 14:16:07.348080837 +0000 UTC m=+227.683459396" Mar 12 14:16:07.384847 master-0 kubenswrapper[7440]: I0312 14:16:07.384736 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:16:07.386890 master-0 kubenswrapper[7440]: I0312 14:16:07.386863 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thh89" Mar 12 14:16:07.440487 master-0 kubenswrapper[7440]: I0312 14:16:07.440445 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"] Mar 12 14:16:07.440968 master-0 kubenswrapper[7440]: I0312 14:16:07.440938 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ns7pm" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="registry-server" containerID="cri-o://88258e715bc540b76097bb99083cec5e9e7c8071119a50353c605425f13a6d2b" gracePeriod=2 Mar 12 14:16:07.470762 master-0 kubenswrapper[7440]: I0312 14:16:07.470718 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8rnk\" (UniqueName: \"kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk\") pod \"d181b683-a575-45a3-b736-ad4e07486545\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " Mar 12 14:16:07.471343 master-0 kubenswrapper[7440]: I0312 14:16:07.471324 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh4cz\" (UniqueName: \"kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz\") pod \"a932351b-831e-4930-85a2-f2faf1e6b262\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " Mar 12 14:16:07.471499 master-0 kubenswrapper[7440]: I0312 14:16:07.471481 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities\") pod \"a932351b-831e-4930-85a2-f2faf1e6b262\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " Mar 12 14:16:07.471651 master-0 kubenswrapper[7440]: I0312 14:16:07.471634 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content\") pod \"d181b683-a575-45a3-b736-ad4e07486545\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " Mar 12 14:16:07.471755 master-0 kubenswrapper[7440]: I0312 14:16:07.471737 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content\") pod \"a932351b-831e-4930-85a2-f2faf1e6b262\" (UID: \"a932351b-831e-4930-85a2-f2faf1e6b262\") " Mar 12 14:16:07.471854 master-0 kubenswrapper[7440]: I0312 14:16:07.471837 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities\") pod \"d181b683-a575-45a3-b736-ad4e07486545\" (UID: \"d181b683-a575-45a3-b736-ad4e07486545\") " Mar 12 14:16:07.472227 master-0 kubenswrapper[7440]: I0312 14:16:07.472183 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities" (OuterVolumeSpecName: "utilities") pod "a932351b-831e-4930-85a2-f2faf1e6b262" (UID: "a932351b-831e-4930-85a2-f2faf1e6b262"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:07.473003 master-0 kubenswrapper[7440]: I0312 14:16:07.472951 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities" (OuterVolumeSpecName: "utilities") pod "d181b683-a575-45a3-b736-ad4e07486545" (UID: "d181b683-a575-45a3-b736-ad4e07486545"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:07.476136 master-0 kubenswrapper[7440]: I0312 14:16:07.476084 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz" (OuterVolumeSpecName: "kube-api-access-hh4cz") pod "a932351b-831e-4930-85a2-f2faf1e6b262" (UID: "a932351b-831e-4930-85a2-f2faf1e6b262"). InnerVolumeSpecName "kube-api-access-hh4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:07.487118 master-0 kubenswrapper[7440]: I0312 14:16:07.487063 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk" (OuterVolumeSpecName: "kube-api-access-t8rnk") pod "d181b683-a575-45a3-b736-ad4e07486545" (UID: "d181b683-a575-45a3-b736-ad4e07486545"). InnerVolumeSpecName "kube-api-access-t8rnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:07.499758 master-0 kubenswrapper[7440]: I0312 14:16:07.499688 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d181b683-a575-45a3-b736-ad4e07486545" (UID: "d181b683-a575-45a3-b736-ad4e07486545"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:07.534416 master-0 kubenswrapper[7440]: I0312 14:16:07.534352 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a932351b-831e-4930-85a2-f2faf1e6b262" (UID: "a932351b-831e-4930-85a2-f2faf1e6b262"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:07.572702 master-0 kubenswrapper[7440]: I0312 14:16:07.572664 7440 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.572702 master-0 kubenswrapper[7440]: I0312 14:16:07.572691 7440 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.572702 master-0 kubenswrapper[7440]: I0312 14:16:07.572700 7440 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d181b683-a575-45a3-b736-ad4e07486545-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.572887 master-0 kubenswrapper[7440]: I0312 14:16:07.572709 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8rnk\" (UniqueName: \"kubernetes.io/projected/d181b683-a575-45a3-b736-ad4e07486545-kube-api-access-t8rnk\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.572887 master-0 kubenswrapper[7440]: I0312 14:16:07.572718 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh4cz\" (UniqueName: \"kubernetes.io/projected/a932351b-831e-4930-85a2-f2faf1e6b262-kube-api-access-hh4cz\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.572887 master-0 kubenswrapper[7440]: I0312 14:16:07.572726 7440 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932351b-831e-4930-85a2-f2faf1e6b262-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:07.641951 master-0 kubenswrapper[7440]: I0312 14:16:07.641821 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4622r"] Mar 12 14:16:07.642158 master-0 
kubenswrapper[7440]: I0312 14:16:07.642116 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4622r" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="registry-server" containerID="cri-o://a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5" gracePeriod=2 Mar 12 14:16:07.668937 master-0 kubenswrapper[7440]: I0312 14:16:07.668877 7440 generic.go:334] "Generic (PLEG): container finished" podID="a932351b-831e-4930-85a2-f2faf1e6b262" containerID="0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f" exitCode=0 Mar 12 14:16:07.669043 master-0 kubenswrapper[7440]: I0312 14:16:07.668991 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thh89" Mar 12 14:16:07.669043 master-0 kubenswrapper[7440]: I0312 14:16:07.669017 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerDied","Data":"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f"} Mar 12 14:16:07.669135 master-0 kubenswrapper[7440]: I0312 14:16:07.669054 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thh89" event={"ID":"a932351b-831e-4930-85a2-f2faf1e6b262","Type":"ContainerDied","Data":"f8a10557be91edf1ddf87676f9207a0449cad63c109bcfc61a138873a1379236"} Mar 12 14:16:07.669135 master-0 kubenswrapper[7440]: I0312 14:16:07.669080 7440 scope.go:117] "RemoveContainer" containerID="0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f" Mar 12 14:16:07.689172 master-0 kubenswrapper[7440]: I0312 14:16:07.689125 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-2pj4z_39252b5a-d014-4319-ad81-3c1bf2ef585e/manager/0.log" Mar 12 14:16:07.691122 master-0 kubenswrapper[7440]: 
I0312 14:16:07.691084 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"b4044e7e2ef92f0cd6613cc7ae6cd69030edcd7a8b1f34e45d134f63f2150425"} Mar 12 14:16:07.691400 master-0 kubenswrapper[7440]: I0312 14:16:07.691371 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:16:07.698294 master-0 kubenswrapper[7440]: I0312 14:16:07.696426 7440 generic.go:334] "Generic (PLEG): container finished" podID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerID="88258e715bc540b76097bb99083cec5e9e7c8071119a50353c605425f13a6d2b" exitCode=0 Mar 12 14:16:07.698294 master-0 kubenswrapper[7440]: I0312 14:16:07.696531 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerDied","Data":"88258e715bc540b76097bb99083cec5e9e7c8071119a50353c605425f13a6d2b"} Mar 12 14:16:07.703930 master-0 kubenswrapper[7440]: I0312 14:16:07.703810 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rqq4v_e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/approver/0.log" Mar 12 14:16:07.706829 master-0 kubenswrapper[7440]: I0312 14:16:07.706356 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"5cb54a4bc2f599bf332cb42ee8b1be36eecedf83f6db23db71f7ec0f390ee742"} Mar 12 14:16:07.712016 master-0 kubenswrapper[7440]: I0312 14:16:07.711968 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-754hn_1f9b15c6-b4ee-4907-8daa-376e3b438896/manager/0.log" Mar 12 
14:16:07.712129 master-0 kubenswrapper[7440]: I0312 14:16:07.712073 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"d8ff2c564a804fe655cf5c13836235d82f004d14fcc6254310c9d20d2a34b9ca"} Mar 12 14:16:07.712371 master-0 kubenswrapper[7440]: I0312 14:16:07.712337 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:16:07.716069 master-0 kubenswrapper[7440]: I0312 14:16:07.716031 7440 generic.go:334] "Generic (PLEG): container finished" podID="d181b683-a575-45a3-b736-ad4e07486545" containerID="3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67" exitCode=0 Mar 12 14:16:07.716129 master-0 kubenswrapper[7440]: I0312 14:16:07.716093 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerDied","Data":"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67"} Mar 12 14:16:07.716129 master-0 kubenswrapper[7440]: I0312 14:16:07.716115 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qngm" event={"ID":"d181b683-a575-45a3-b736-ad4e07486545","Type":"ContainerDied","Data":"ad049afe5ae9a1ecf28a08d3c5dea4946348cb8f5a1a87ed70b45bf058b12cac"} Mar 12 14:16:07.716199 master-0 kubenswrapper[7440]: I0312 14:16:07.716182 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qngm" Mar 12 14:16:07.719602 master-0 kubenswrapper[7440]: I0312 14:16:07.719572 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/0.log" Mar 12 14:16:07.720563 master-0 kubenswrapper[7440]: I0312 14:16:07.719638 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812"} Mar 12 14:16:07.728301 master-0 kubenswrapper[7440]: I0312 14:16:07.728050 7440 scope.go:117] "RemoveContainer" containerID="301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da" Mar 12 14:16:07.729110 master-0 kubenswrapper[7440]: I0312 14:16:07.728992 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049"} Mar 12 14:16:07.729758 master-0 kubenswrapper[7440]: I0312 14:16:07.729736 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" containerID="cri-o://73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb" Mar 12 14:16:07.729800 master-0 kubenswrapper[7440]: I0312 14:16:07.729758 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:16:07.744586 master-0 kubenswrapper[7440]: I0312 14:16:07.744543 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 
14:16:07.744668 master-0 kubenswrapper[7440]: I0312 14:16:07.744592 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:16:07.769617 master-0 kubenswrapper[7440]: I0312 14:16:07.767360 7440 scope.go:117] "RemoveContainer" containerID="74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab" Mar 12 14:16:07.775183 master-0 kubenswrapper[7440]: I0312 14:16:07.775106 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:16:07.785612 master-0 kubenswrapper[7440]: I0312 14:16:07.784946 7440 scope.go:117] "RemoveContainer" containerID="0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f" Mar 12 14:16:07.791539 master-0 kubenswrapper[7440]: E0312 14:16:07.791286 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f\": container with ID starting with 0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f not found: ID does not exist" containerID="0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f" Mar 12 14:16:07.791539 master-0 kubenswrapper[7440]: I0312 14:16:07.791352 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f"} err="failed to get container status \"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f\": rpc error: code = NotFound desc = could not find container \"0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f\": container with ID starting with 0fe01a0dbee94f17641e27b15e6358ba154e2dc2bbe75b79d78402ecab3bf79f not found: ID does not exist" Mar 12 14:16:07.791539 master-0 kubenswrapper[7440]: I0312 14:16:07.791378 7440 scope.go:117] "RemoveContainer" 
containerID="301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da" Mar 12 14:16:07.793209 master-0 kubenswrapper[7440]: E0312 14:16:07.792981 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da\": container with ID starting with 301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da not found: ID does not exist" containerID="301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da" Mar 12 14:16:07.793209 master-0 kubenswrapper[7440]: I0312 14:16:07.793054 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da"} err="failed to get container status \"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da\": rpc error: code = NotFound desc = could not find container \"301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da\": container with ID starting with 301380442ddd774e8f58eb782166994c76dcab49ea2cd60afb98a69d120ab1da not found: ID does not exist" Mar 12 14:16:07.793209 master-0 kubenswrapper[7440]: I0312 14:16:07.793084 7440 scope.go:117] "RemoveContainer" containerID="74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab" Mar 12 14:16:07.794023 master-0 kubenswrapper[7440]: E0312 14:16:07.793527 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab\": container with ID starting with 74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab not found: ID does not exist" containerID="74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab" Mar 12 14:16:07.794023 master-0 kubenswrapper[7440]: I0312 14:16:07.793568 7440 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab"} err="failed to get container status \"74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab\": rpc error: code = NotFound desc = could not find container \"74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab\": container with ID starting with 74fb402f739f96f56154340ca788d707573841081bff6bef4caa13bff71d91ab not found: ID does not exist" Mar 12 14:16:07.794023 master-0 kubenswrapper[7440]: I0312 14:16:07.793587 7440 scope.go:117] "RemoveContainer" containerID="3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67" Mar 12 14:16:07.801937 master-0 kubenswrapper[7440]: I0312 14:16:07.799013 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-thh89"] Mar 12 14:16:07.810662 master-0 kubenswrapper[7440]: I0312 14:16:07.810297 7440 scope.go:117] "RemoveContainer" containerID="d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba" Mar 12 14:16:07.817797 master-0 kubenswrapper[7440]: I0312 14:16:07.815825 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ns7pm" Mar 12 14:16:07.817797 master-0 kubenswrapper[7440]: I0312 14:16:07.816680 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fb91c7-1b94-4f59-82f2-3025f0b02e43" path="/var/lib/kubelet/pods/56fb91c7-1b94-4f59-82f2-3025f0b02e43/volumes" Mar 12 14:16:07.817797 master-0 kubenswrapper[7440]: I0312 14:16:07.817204 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" path="/var/lib/kubelet/pods/7023af8b-bfcc-4253-85cd-d891dff1c86e/volumes" Mar 12 14:16:07.817797 master-0 kubenswrapper[7440]: I0312 14:16:07.817697 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" path="/var/lib/kubelet/pods/a932351b-831e-4930-85a2-f2faf1e6b262/volumes" Mar 12 14:16:07.823678 master-0 kubenswrapper[7440]: I0312 14:16:07.818780 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc86a749-8fef-462c-b422-95155cb6ca21" path="/var/lib/kubelet/pods/bc86a749-8fef-462c-b422-95155cb6ca21/volumes" Mar 12 14:16:07.823678 master-0 kubenswrapper[7440]: I0312 14:16:07.819280 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:16:07.823678 master-0 kubenswrapper[7440]: I0312 14:16:07.819301 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qngm"] Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.852630 7440 scope.go:117] "RemoveContainer" containerID="75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272" Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.876724 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities\") pod \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\" (UID: 
\"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.876818 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztgb\" (UniqueName: \"kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb\") pod \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.876854 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content\") pod \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\" (UID: \"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b\") " Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.878270 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities" (OuterVolumeSpecName: "utilities") pod "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" (UID: "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:07.891683 master-0 kubenswrapper[7440]: I0312 14:16:07.889165 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb" (OuterVolumeSpecName: "kube-api-access-dztgb") pod "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" (UID: "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b"). InnerVolumeSpecName "kube-api-access-dztgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:07.911390 master-0 kubenswrapper[7440]: I0312 14:16:07.911186 7440 scope.go:117] "RemoveContainer" containerID="3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67" Mar 12 14:16:07.911725 master-0 kubenswrapper[7440]: E0312 14:16:07.911692 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67\": container with ID starting with 3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67 not found: ID does not exist" containerID="3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67" Mar 12 14:16:07.911781 master-0 kubenswrapper[7440]: I0312 14:16:07.911735 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67"} err="failed to get container status \"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67\": rpc error: code = NotFound desc = could not find container \"3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67\": container with ID starting with 3e79e6cf6c2a81d84480642bdb6e13725272037b0e0f9e2a9958b1bfd7b31b67 not found: ID does not exist" Mar 12 14:16:07.911781 master-0 kubenswrapper[7440]: I0312 14:16:07.911765 7440 scope.go:117] "RemoveContainer" containerID="d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba" Mar 12 14:16:07.912220 master-0 kubenswrapper[7440]: E0312 14:16:07.912178 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba\": container with ID starting with d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba not found: ID does not exist" containerID="d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba" 
Mar 12 14:16:07.912220 master-0 kubenswrapper[7440]: I0312 14:16:07.912211 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba"} err="failed to get container status \"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba\": rpc error: code = NotFound desc = could not find container \"d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba\": container with ID starting with d8d7fa2e13e14e0b984d9f7e1f54f4e5ea0c4414c868abc17d41c48f7a68c9ba not found: ID does not exist" Mar 12 14:16:07.912316 master-0 kubenswrapper[7440]: I0312 14:16:07.912230 7440 scope.go:117] "RemoveContainer" containerID="75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272" Mar 12 14:16:07.912486 master-0 kubenswrapper[7440]: E0312 14:16:07.912451 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272\": container with ID starting with 75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272 not found: ID does not exist" containerID="75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272" Mar 12 14:16:07.912531 master-0 kubenswrapper[7440]: I0312 14:16:07.912478 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272"} err="failed to get container status \"75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272\": rpc error: code = NotFound desc = could not find container \"75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272\": container with ID starting with 75cce3e44d0b316e12d6d6e14e98cf027dec02a2c5b39a8c50e537653cad5272 not found: ID does not exist" Mar 12 14:16:07.978446 master-0 kubenswrapper[7440]: I0312 14:16:07.978348 7440 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-utilities\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:07.978446 master-0 kubenswrapper[7440]: I0312 14:16:07.978393 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dztgb\" (UniqueName: \"kubernetes.io/projected/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-kube-api-access-dztgb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:08.048848 master-0 kubenswrapper[7440]: I0312 14:16:08.048793 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" (UID: "2e0d04d0-6ea2-4b4e-a881-968db7d31c7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:16:08.079944 master-0 kubenswrapper[7440]: I0312 14:16:08.079875 7440 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133277 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133459 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fb91c7-1b94-4f59-82f2-3025f0b02e43" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133470 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fb91c7-1b94-4f59-82f2-3025f0b02e43" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133483 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133489 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133499 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133506 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133512 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="multus-admission-controller"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133518 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="multus-admission-controller"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133525 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133531 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133540 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133546 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133555 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="kube-rbac-proxy"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133561 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="kube-rbac-proxy"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133570 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133579 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133592 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133599 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133607 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133614 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133628 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133636 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133647 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133652 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="extract-utilities"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133661 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133666 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133676 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133681 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: E0312 14:16:08.133691 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133696 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="extract-content"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133784 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133799 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="multus-admission-controller"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133808 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133815 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="a932351b-831e-4930-85a2-f2faf1e6b262" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133823 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7023af8b-bfcc-4253-85cd-d891dff1c86e" containerName="kube-rbac-proxy"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133829 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133836 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="d181b683-a575-45a3-b736-ad4e07486545" containerName="registry-server"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133843 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fb91c7-1b94-4f59-82f2-3025f0b02e43" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.133850 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.134232 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.136388 master-0 kubenswrapper[7440]: I0312 14:16:08.136314 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 14:16:08.137835 master-0 kubenswrapper[7440]: I0312 14:16:08.136793 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xvhv"
Mar 12 14:16:08.146582 master-0 kubenswrapper[7440]: I0312 14:16:08.144533 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 12 14:16:08.149141 master-0 kubenswrapper[7440]: I0312 14:16:08.149118 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4622r"
Mar 12 14:16:08.180610 master-0 kubenswrapper[7440]: I0312 14:16:08.180394 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q6kr\" (UniqueName: \"kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr\") pod \"191fe879-7ece-4f8c-bae6-cf46acb382c9\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") "
Mar 12 14:16:08.180610 master-0 kubenswrapper[7440]: I0312 14:16:08.180471 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content\") pod \"191fe879-7ece-4f8c-bae6-cf46acb382c9\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") "
Mar 12 14:16:08.180610 master-0 kubenswrapper[7440]: I0312 14:16:08.180530 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities\") pod \"191fe879-7ece-4f8c-bae6-cf46acb382c9\" (UID: \"191fe879-7ece-4f8c-bae6-cf46acb382c9\") "
Mar 12 14:16:08.180987 master-0 kubenswrapper[7440]: I0312 14:16:08.180725 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.180987 master-0 kubenswrapper[7440]: I0312 14:16:08.180799 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.180987 master-0 kubenswrapper[7440]: I0312 14:16:08.180834 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.181890 master-0 kubenswrapper[7440]: I0312 14:16:08.181838 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities" (OuterVolumeSpecName: "utilities") pod "191fe879-7ece-4f8c-bae6-cf46acb382c9" (UID: "191fe879-7ece-4f8c-bae6-cf46acb382c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:16:08.203961 master-0 kubenswrapper[7440]: I0312 14:16:08.202037 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr" (OuterVolumeSpecName: "kube-api-access-9q6kr") pod "191fe879-7ece-4f8c-bae6-cf46acb382c9" (UID: "191fe879-7ece-4f8c-bae6-cf46acb382c9"). InnerVolumeSpecName "kube-api-access-9q6kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:16:08.286621 master-0 kubenswrapper[7440]: I0312 14:16:08.286542 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.286621 master-0 kubenswrapper[7440]: I0312 14:16:08.286616 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.286917 master-0 kubenswrapper[7440]: I0312 14:16:08.286681 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.286917 master-0 kubenswrapper[7440]: I0312 14:16:08.286737 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q6kr\" (UniqueName: \"kubernetes.io/projected/191fe879-7ece-4f8c-bae6-cf46acb382c9-kube-api-access-9q6kr\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:08.286917 master-0 kubenswrapper[7440]: I0312 14:16:08.286752 7440 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-utilities\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:08.286917 master-0 kubenswrapper[7440]: I0312 14:16:08.286797 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.287186 master-0 kubenswrapper[7440]: I0312 14:16:08.287151 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.302018 master-0 kubenswrapper[7440]: I0312 14:16:08.301944 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "191fe879-7ece-4f8c-bae6-cf46acb382c9" (UID: "191fe879-7ece-4f8c-bae6-cf46acb382c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:16:08.331933 master-0 kubenswrapper[7440]: I0312 14:16:08.331858 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.388350 master-0 kubenswrapper[7440]: I0312 14:16:08.388288 7440 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191fe879-7ece-4f8c-bae6-cf46acb382c9-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:08.477161 master-0 kubenswrapper[7440]: I0312 14:16:08.477054 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 12 14:16:08.746991 master-0 kubenswrapper[7440]: I0312 14:16:08.746875 7440 generic.go:334] "Generic (PLEG): container finished" podID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerID="a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5" exitCode=0
Mar 12 14:16:08.746991 master-0 kubenswrapper[7440]: I0312 14:16:08.746961 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerDied","Data":"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"}
Mar 12 14:16:08.747255 master-0 kubenswrapper[7440]: I0312 14:16:08.747035 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4622r" event={"ID":"191fe879-7ece-4f8c-bae6-cf46acb382c9","Type":"ContainerDied","Data":"1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782"}
Mar 12 14:16:08.747255 master-0 kubenswrapper[7440]: I0312 14:16:08.747073 7440 scope.go:117] "RemoveContainer" containerID="a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"
Mar 12 14:16:08.747340 master-0 kubenswrapper[7440]: I0312 14:16:08.747277 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4622r"
Mar 12 14:16:08.755428 master-0 kubenswrapper[7440]: I0312 14:16:08.755382 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ns7pm"
Mar 12 14:16:08.755428 master-0 kubenswrapper[7440]: I0312 14:16:08.755291 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ns7pm" event={"ID":"2e0d04d0-6ea2-4b4e-a881-968db7d31c7b","Type":"ContainerDied","Data":"841b9a60fc8e48fe4721499840092282bfd7c62abd981c2c9d32a9c0204e85cd"}
Mar 12 14:16:08.757582 master-0 kubenswrapper[7440]: I0312 14:16:08.757553 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:16:08.770497 master-0 kubenswrapper[7440]: I0312 14:16:08.770243 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:16:08.783931 master-0 kubenswrapper[7440]: I0312 14:16:08.783873 7440 scope.go:117] "RemoveContainer" containerID="0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"
Mar 12 14:16:08.798778 master-0 kubenswrapper[7440]: I0312 14:16:08.798642 7440 scope.go:117] "RemoveContainer" containerID="d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a"
Mar 12 14:16:08.811512 master-0 kubenswrapper[7440]: I0312 14:16:08.811297 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4622r"]
Mar 12 14:16:08.814920 master-0 kubenswrapper[7440]: I0312 14:16:08.814875 7440 scope.go:117] "RemoveContainer" containerID="a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"
Mar 12 14:16:08.815467 master-0 kubenswrapper[7440]: E0312 14:16:08.815388 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5\": container with ID starting with a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5 not found: ID does not exist" containerID="a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"
Mar 12 14:16:08.815554 master-0 kubenswrapper[7440]: I0312 14:16:08.815516 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5"} err="failed to get container status \"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5\": rpc error: code = NotFound desc = could not find container \"a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5\": container with ID starting with a56be2a786928fd5eaf82b7365566e1dbacec830c9324c47c9ab044e97cd0ce5 not found: ID does not exist"
Mar 12 14:16:08.815762 master-0 kubenswrapper[7440]: I0312 14:16:08.815554 7440 scope.go:117] "RemoveContainer" containerID="0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"
Mar 12 14:16:08.816128 master-0 kubenswrapper[7440]: E0312 14:16:08.816091 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a\": container with ID starting with 0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a not found: ID does not exist" containerID="0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"
Mar 12 14:16:08.816179 master-0 kubenswrapper[7440]: I0312 14:16:08.816125 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a"} err="failed to get container status \"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a\": rpc error: code = NotFound desc = could not find container \"0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a\": container with ID starting with 0874bc42d51f1eeb79284639fd2174ae8726c365d0a5e04de38df5932f77ea4a not found: ID does not exist"
Mar 12 14:16:08.816179 master-0 kubenswrapper[7440]: I0312 14:16:08.816150 7440 scope.go:117] "RemoveContainer" containerID="d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a"
Mar 12 14:16:08.816473 master-0 kubenswrapper[7440]: E0312 14:16:08.816426 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a\": container with ID starting with d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a not found: ID does not exist" containerID="d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a"
Mar 12 14:16:08.816508 master-0 kubenswrapper[7440]: I0312 14:16:08.816480 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a"} err="failed to get container status \"d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a\": rpc error: code = NotFound desc = could not find container \"d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a\": container with ID starting with d719003a3f7ad0713f784a1cb591dc3c3a9a743ae24bde8d2da3a255b6858a9a not found: ID does not exist"
Mar 12 14:16:08.816508 master-0 kubenswrapper[7440]: I0312 14:16:08.816504 7440 scope.go:117] "RemoveContainer" containerID="88258e715bc540b76097bb99083cec5e9e7c8071119a50353c605425f13a6d2b"
Mar 12 14:16:08.817616 master-0 kubenswrapper[7440]: I0312 14:16:08.817396 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4622r"]
Mar 12 14:16:08.825875 master-0 kubenswrapper[7440]: I0312 14:16:08.823696 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"]
Mar 12 14:16:08.830764 master-0 kubenswrapper[7440]: I0312 14:16:08.830708 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ns7pm"]
Mar 12 14:16:08.842764 master-0 kubenswrapper[7440]: I0312 14:16:08.842724 7440 scope.go:117] "RemoveContainer" containerID="f241461857ce59ea560667138db043106ff9a507cada6dbe3fa25235fbd8ecbd"
Mar 12 14:16:08.856362 master-0 kubenswrapper[7440]: I0312 14:16:08.856319 7440 scope.go:117] "RemoveContainer" containerID="170193929f6a99afbd76eacae4d3179712e768f76f3e9ac38d49e68d3e5f5c8d"
Mar 12 14:16:08.889405 master-0 kubenswrapper[7440]: I0312 14:16:08.889352 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 12 14:16:08.896538 master-0 kubenswrapper[7440]: W0312 14:16:08.896124 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbe4847ff_0a31_4147_93f6_0cdb03f2418d.slice/crio-189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74 WatchSource:0}: Error finding container 189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74: Status 404 returned error can't find the container with id 189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74
Mar 12 14:16:08.898032 master-0 kubenswrapper[7440]: E0312 14:16:08.897419 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e0d04d0_6ea2_4b4e_a881_968db7d31c7b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e0d04d0_6ea2_4b4e_a881_968db7d31c7b.slice/crio-841b9a60fc8e48fe4721499840092282bfd7c62abd981c2c9d32a9c0204e85cd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191fe879_7ece_4f8c_bae6_cf46acb382c9.slice/crio-1e38cfb7b1cfebf62b43a9053c4b94b244fdd93e3535e4f97ff145735061c782\": RecentStats: unable to find data in memory cache]"
Mar 12 14:16:09.450893 master-0 kubenswrapper[7440]: I0312 14:16:09.450756 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:09.450893 master-0 kubenswrapper[7440]: I0312 14:16:09.450861 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:09.474457 master-0 kubenswrapper[7440]: I0312 14:16:09.474382 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:09.763256 master-0 kubenswrapper[7440]: I0312 14:16:09.763198 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be4847ff-0a31-4147-93f6-0cdb03f2418d","Type":"ContainerStarted","Data":"241aab17123596d30cb151981c1709611449c7907327ce4b19c53019951ff0d7"}
Mar 12 14:16:09.763256 master-0 kubenswrapper[7440]: I0312 14:16:09.763257 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be4847ff-0a31-4147-93f6-0cdb03f2418d","Type":"ContainerStarted","Data":"189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74"}
Mar 12 14:16:09.766087 master-0 kubenswrapper[7440]: I0312 14:16:09.766051 7440 generic.go:334] "Generic (PLEG): container finished" podID="dd29b21c-7a0e-4311-952f-427b00468e66" containerID="91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075" exitCode=0
Mar 12 14:16:09.766194 master-0 kubenswrapper[7440]: I0312 14:16:09.766127 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerDied","Data":"91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075"}
Mar 12 14:16:09.766194 master-0 kubenswrapper[7440]: I0312 14:16:09.766170 7440 scope.go:117] "RemoveContainer" containerID="06754d581cc8aca46ceb909759a4cdf83f5358cda6d0633cc92ae3b0cb8c8c05"
Mar 12 14:16:09.766677 master-0 kubenswrapper[7440]: I0312 14:16:09.766635 7440 scope.go:117] "RemoveContainer" containerID="91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075"
Mar 12 14:16:09.766877 master-0 kubenswrapper[7440]: E0312 14:16:09.766856 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-gltz7_openshift-insights(dd29b21c-7a0e-4311-952f-427b00468e66)\"" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" podUID="dd29b21c-7a0e-4311-952f-427b00468e66"
Mar 12 14:16:09.780260 master-0 kubenswrapper[7440]: I0312 14:16:09.780203 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.780160504 podStartE2EDuration="1.780160504s" podCreationTimestamp="2026-03-12 14:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:09.778584784 +0000 UTC m=+230.113963333" watchObservedRunningTime="2026-03-12 14:16:09.780160504 +0000 UTC m=+230.115539063"
Mar 12 14:16:09.780661 master-0 kubenswrapper[7440]: I0312 14:16:09.780636 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:09.821068 master-0 kubenswrapper[7440]: I0312 14:16:09.820659 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" path="/var/lib/kubelet/pods/191fe879-7ece-4f8c-bae6-cf46acb382c9/volumes"
Mar 12 14:16:09.822046 master-0 kubenswrapper[7440]: I0312 14:16:09.821558 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0d04d0-6ea2-4b4e-a881-968db7d31c7b" path="/var/lib/kubelet/pods/2e0d04d0-6ea2-4b4e-a881-968db7d31c7b/volumes"
Mar 12 14:16:09.822353 master-0 kubenswrapper[7440]: I0312 14:16:09.822318 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d181b683-a575-45a3-b736-ad4e07486545" path="/var/lib/kubelet/pods/d181b683-a575-45a3-b736-ad4e07486545/volumes"
Mar 12 14:16:10.851722 master-0 kubenswrapper[7440]: I0312 14:16:10.851681 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:16:10.857461 master-0 kubenswrapper[7440]: I0312 14:16:10.857431 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:16:14.065741 master-0 kubenswrapper[7440]: I0312 14:16:14.065683 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9bljc"]
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: E0312 14:16:14.065998 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="extract-content"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: I0312 14:16:14.066016 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="extract-content"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: E0312 14:16:14.066047 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="registry-server"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: I0312 14:16:14.066055 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="registry-server"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: E0312 14:16:14.066068 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="extract-utilities"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: I0312 14:16:14.066077 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="extract-utilities"
Mar 12 14:16:14.066664 master-0 kubenswrapper[7440]: I0312 14:16:14.066179 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="191fe879-7ece-4f8c-bae6-cf46acb382c9" containerName="registry-server"
Mar 12 14:16:14.067094 master-0 kubenswrapper[7440]: I0312 14:16:14.067065 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:14.075065 master-0 kubenswrapper[7440]: I0312 14:16:14.074976 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgqz4"]
Mar 12 14:16:14.077277 master-0 kubenswrapper[7440]: I0312 14:16:14.077249 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmhgb"]
Mar 12 14:16:14.077814 master-0 kubenswrapper[7440]: I0312 14:16:14.077605 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:14.082318 master-0 kubenswrapper[7440]: I0312 14:16:14.080835 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-cfs7s"
Mar 12 14:16:14.082318 master-0 kubenswrapper[7440]: I0312 14:16:14.081000 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-q2s6g"
Mar 12 14:16:14.082497 master-0 kubenswrapper[7440]: I0312 14:16:14.082332 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gbmc"]
Mar 12 14:16:14.082544 master-0 kubenswrapper[7440]: I0312 14:16:14.082510 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:16:14.084563 master-0 kubenswrapper[7440]: I0312 14:16:14.084521 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:14.084653 master-0 kubenswrapper[7440]: I0312 14:16:14.084621 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:14.085313 master-0 kubenswrapper[7440]: I0312 14:16:14.085293 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:14.085542 master-0 kubenswrapper[7440]: I0312 14:16:14.085514 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:14.090566 master-0 kubenswrapper[7440]: I0312 14:16:14.090537 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pjs2n"
Mar 12 14:16:14.091431 master-0 kubenswrapper[7440]: I0312 14:16:14.091413 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-mxcxn"
Mar 12 14:16:14.096206 master-0 kubenswrapper[7440]: I0312 14:16:14.094978 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgqz4"]
Mar 12 14:16:14.096206 master-0 kubenswrapper[7440]: I0312 14:16:14.095791 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:14.103950 master-0 kubenswrapper[7440]: I0312 14:16:14.103228 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmhgb"]
Mar 12 14:16:14.114006 master-0 kubenswrapper[7440]: I0312 14:16:14.113980 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gbmc"]
Mar 12 14:16:14.117052 master-0 kubenswrapper[7440]: I0312 14:16:14.117018 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bljc"]
Mar 12 14:16:14.154840 master-0 kubenswrapper[7440]: I0312 14:16:14.154802 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfz8z\" (UniqueName: \"kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.154843 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.154888 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.154933 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.154954 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.154989 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9cfq\" (UniqueName: \"kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.155010 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:14.155029 master-0 kubenswrapper[7440]: I0312 14:16:14.155031 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:14.155238 master-0 kubenswrapper[7440]: I0312 14:16:14.155178 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwl8\" (UniqueName: \"kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:14.257521 master-0 kubenswrapper[7440]: I0312 14:16:14.257454 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9cfq\" (UniqueName: \"kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:14.257820 master-0 kubenswrapper[7440]: I0312 14:16:14.257803 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities\")
pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.257965 master-0 kubenswrapper[7440]: I0312 14:16:14.257945 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qv7x\" (UniqueName: \"kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.258097 master-0 kubenswrapper[7440]: I0312 14:16:14.258082 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.258212 master-0 kubenswrapper[7440]: I0312 14:16:14.258198 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwl8\" (UniqueName: \"kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.258312 master-0 kubenswrapper[7440]: I0312 14:16:14.258296 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.258423 master-0 kubenswrapper[7440]: I0312 14:16:14.258384 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.258423 master-0 kubenswrapper[7440]: I0312 14:16:14.258397 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz8z\" (UniqueName: \"kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.258506 master-0 kubenswrapper[7440]: I0312 14:16:14.258403 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.258735 master-0 kubenswrapper[7440]: I0312 14:16:14.258702 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.258816 master-0 kubenswrapper[7440]: I0312 14:16:14.258793 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.258867 master-0 kubenswrapper[7440]: I0312 14:16:14.258856 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.258931 master-0 kubenswrapper[7440]: I0312 14:16:14.258881 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.259013 master-0 kubenswrapper[7440]: I0312 14:16:14.258992 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.259065 master-0 kubenswrapper[7440]: I0312 14:16:14.259035 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.259232 master-0 kubenswrapper[7440]: I0312 14:16:14.259187 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.259444 master-0 kubenswrapper[7440]: I0312 14:16:14.259409 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.259579 master-0 kubenswrapper[7440]: I0312 14:16:14.259531 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.274308 master-0 kubenswrapper[7440]: I0312 14:16:14.274249 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz8z\" (UniqueName: \"kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.274671 master-0 kubenswrapper[7440]: I0312 14:16:14.274640 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwl8\" (UniqueName: \"kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.276499 master-0 kubenswrapper[7440]: I0312 14:16:14.276460 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9cfq\" (UniqueName: \"kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.360694 master-0 kubenswrapper[7440]: I0312 14:16:14.360526 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6qv7x\" (UniqueName: \"kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.360694 master-0 kubenswrapper[7440]: I0312 14:16:14.360693 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.360960 master-0 kubenswrapper[7440]: I0312 14:16:14.360740 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.361252 master-0 kubenswrapper[7440]: I0312 14:16:14.361223 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.361330 master-0 kubenswrapper[7440]: I0312 14:16:14.361291 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.376542 master-0 kubenswrapper[7440]: I0312 14:16:14.376502 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qv7x\" (UniqueName: \"kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.419383 master-0 kubenswrapper[7440]: I0312 14:16:14.419338 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:14.445974 master-0 kubenswrapper[7440]: I0312 14:16:14.443078 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:16:14.461581 master-0 kubenswrapper[7440]: I0312 14:16:14.461230 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:16:14.470502 master-0 kubenswrapper[7440]: I0312 14:16:14.470462 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:16:14.674216 master-0 kubenswrapper[7440]: I0312 14:16:14.674166 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9bljc"] Mar 12 14:16:14.683446 master-0 kubenswrapper[7440]: W0312 14:16:14.683390 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f59d485_9f69_4f36_836e_6338f84b7d69.slice/crio-1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50 WatchSource:0}: Error finding container 1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50: Status 404 returned error can't find the container with id 1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50 Mar 12 14:16:14.811685 master-0 kubenswrapper[7440]: I0312 14:16:14.811644 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"6f88048bcaa35db146cb15d79ce615c930b521dad3951a081c1c2ef94a48da36"} Mar 12 14:16:14.811685 master-0 kubenswrapper[7440]: I0312 14:16:14.811686 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50"} Mar 12 14:16:14.957395 master-0 kubenswrapper[7440]: I0312 14:16:14.957347 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgqz4"] Mar 12 14:16:14.959158 master-0 kubenswrapper[7440]: W0312 14:16:14.959116 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70710a0b_8b5d_40f5_b726_fd5e2836ffbe.slice/crio-f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0 WatchSource:0}: 
Error finding container f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0: Status 404 returned error can't find the container with id f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0 Mar 12 14:16:15.007774 master-0 kubenswrapper[7440]: I0312 14:16:15.007738 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gbmc"] Mar 12 14:16:15.017584 master-0 kubenswrapper[7440]: W0312 14:16:15.017521 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2742559_1f28_4f2c_a873_d6a9348972fb.slice/crio-44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f WatchSource:0}: Error finding container 44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f: Status 404 returned error can't find the container with id 44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f Mar 12 14:16:15.060618 master-0 kubenswrapper[7440]: I0312 14:16:15.060586 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmhgb"] Mar 12 14:16:15.071366 master-0 kubenswrapper[7440]: W0312 14:16:15.071336 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcba33300_f7ef_4547_97ff_62e223da79cf.slice/crio-19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86 WatchSource:0}: Error finding container 19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86: Status 404 returned error can't find the container with id 19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86 Mar 12 14:16:15.818840 master-0 kubenswrapper[7440]: I0312 14:16:15.818745 7440 generic.go:334] "Generic (PLEG): container finished" podID="cba33300-f7ef-4547-97ff-62e223da79cf" containerID="1bc9540ba67897e35b5ccbe24ebd39e07a2c8806ea8a765dbac1ad9e9c299016" exitCode=0 Mar 12 14:16:15.818840 master-0 
kubenswrapper[7440]: I0312 14:16:15.818814 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerDied","Data":"1bc9540ba67897e35b5ccbe24ebd39e07a2c8806ea8a765dbac1ad9e9c299016"} Mar 12 14:16:15.819336 master-0 kubenswrapper[7440]: I0312 14:16:15.818932 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerStarted","Data":"19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86"} Mar 12 14:16:15.822546 master-0 kubenswrapper[7440]: I0312 14:16:15.820828 7440 generic.go:334] "Generic (PLEG): container finished" podID="e2742559-1f28-4f2c-a873-d6a9348972fb" containerID="935fc506f983008a79b60e43ad782c4f076fe53a90782b9c09742c04419944c2" exitCode=0 Mar 12 14:16:15.822546 master-0 kubenswrapper[7440]: I0312 14:16:15.820963 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerDied","Data":"935fc506f983008a79b60e43ad782c4f076fe53a90782b9c09742c04419944c2"} Mar 12 14:16:15.822546 master-0 kubenswrapper[7440]: I0312 14:16:15.821027 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerStarted","Data":"44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f"} Mar 12 14:16:15.824697 master-0 kubenswrapper[7440]: I0312 14:16:15.823982 7440 generic.go:334] "Generic (PLEG): container finished" podID="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" containerID="8d3bb5013ca4c818b7c70903d8fce9e610940673188c266c6d78750aa35aac12" exitCode=0 Mar 12 14:16:15.824697 master-0 kubenswrapper[7440]: I0312 14:16:15.824127 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerDied","Data":"8d3bb5013ca4c818b7c70903d8fce9e610940673188c266c6d78750aa35aac12"} Mar 12 14:16:15.824697 master-0 kubenswrapper[7440]: I0312 14:16:15.824209 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerStarted","Data":"f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0"} Mar 12 14:16:15.828831 master-0 kubenswrapper[7440]: I0312 14:16:15.828354 7440 generic.go:334] "Generic (PLEG): container finished" podID="2f59d485-9f69-4f36-836e-6338f84b7d69" containerID="6f88048bcaa35db146cb15d79ce615c930b521dad3951a081c1c2ef94a48da36" exitCode=0 Mar 12 14:16:15.829194 master-0 kubenswrapper[7440]: I0312 14:16:15.829033 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerDied","Data":"6f88048bcaa35db146cb15d79ce615c930b521dad3951a081c1c2ef94a48da36"} Mar 12 14:16:16.835952 master-0 kubenswrapper[7440]: I0312 14:16:16.835881 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"fd763b32a6f9e14de1e48ab02ce0e8ed0420b566d892ab96ff30c9ac6deeebf4"} Mar 12 14:16:17.842630 master-0 kubenswrapper[7440]: I0312 14:16:17.842507 7440 generic.go:334] "Generic (PLEG): container finished" podID="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" containerID="1b509b364f4790e7d098a08001f85e21186839f1379b4fc1d8a3f87999a8287a" exitCode=0 Mar 12 14:16:17.843245 master-0 kubenswrapper[7440]: I0312 14:16:17.843203 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" 
event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerDied","Data":"1b509b364f4790e7d098a08001f85e21186839f1379b4fc1d8a3f87999a8287a"} Mar 12 14:16:17.844866 master-0 kubenswrapper[7440]: I0312 14:16:17.844824 7440 generic.go:334] "Generic (PLEG): container finished" podID="2f59d485-9f69-4f36-836e-6338f84b7d69" containerID="fd763b32a6f9e14de1e48ab02ce0e8ed0420b566d892ab96ff30c9ac6deeebf4" exitCode=0 Mar 12 14:16:17.844975 master-0 kubenswrapper[7440]: I0312 14:16:17.844867 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerDied","Data":"fd763b32a6f9e14de1e48ab02ce0e8ed0420b566d892ab96ff30c9ac6deeebf4"} Mar 12 14:16:17.847534 master-0 kubenswrapper[7440]: I0312 14:16:17.847487 7440 generic.go:334] "Generic (PLEG): container finished" podID="cba33300-f7ef-4547-97ff-62e223da79cf" containerID="eb008940bc7dc6c2ae442f778e48aef8337971c8ef1e3c95db6a891e0cad1a81" exitCode=0 Mar 12 14:16:17.847612 master-0 kubenswrapper[7440]: I0312 14:16:17.847556 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerDied","Data":"eb008940bc7dc6c2ae442f778e48aef8337971c8ef1e3c95db6a891e0cad1a81"} Mar 12 14:16:17.849938 master-0 kubenswrapper[7440]: I0312 14:16:17.849883 7440 generic.go:334] "Generic (PLEG): container finished" podID="e2742559-1f28-4f2c-a873-d6a9348972fb" containerID="a6e68da263c509d4a3107148074b05db9d9991a2f13362fc7aaad75eb4e279c0" exitCode=0 Mar 12 14:16:17.850021 master-0 kubenswrapper[7440]: I0312 14:16:17.849921 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerDied","Data":"a6e68da263c509d4a3107148074b05db9d9991a2f13362fc7aaad75eb4e279c0"} Mar 12 14:16:17.968503 master-0 
kubenswrapper[7440]: I0312 14:16:17.968461 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:16:17.968733 master-0 kubenswrapper[7440]: I0312 14:16:17.968708 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:16:18.574747 master-0 kubenswrapper[7440]: I0312 14:16:18.574620 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:16:18.863664 master-0 kubenswrapper[7440]: I0312 14:16:18.863560 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerStarted","Data":"48a67c0385d8a7388255b47c510bfe700f7804124474e7e1a69fe4888870bf2a"} Mar 12 14:16:18.866569 master-0 kubenswrapper[7440]: I0312 14:16:18.866524 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerStarted","Data":"86b3413a245ccb948b2791723b699bee2548d7f2a2bcf15246661ec724ccd645"} Mar 12 14:16:18.869978 master-0 kubenswrapper[7440]: I0312 14:16:18.869924 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerStarted","Data":"fb097b697c600a4c9949f08cdf30a60a633ba6d4b0ed4e2e71d781af9c42818b"} Mar 12 
14:16:18.874410 master-0 kubenswrapper[7440]: I0312 14:16:18.874357 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"0ae163796f8d852887d4f4bb30a0ee7a5d70bdc703b68435049e704d0b2a64bb"} Mar 12 14:16:18.891417 master-0 kubenswrapper[7440]: I0312 14:16:18.891331 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmhgb" podStartSLOduration=10.443789496 podStartE2EDuration="12.891306818s" podCreationTimestamp="2026-03-12 14:16:06 +0000 UTC" firstStartedPulling="2026-03-12 14:16:15.820750479 +0000 UTC m=+236.156129048" lastFinishedPulling="2026-03-12 14:16:18.268267811 +0000 UTC m=+238.603646370" observedRunningTime="2026-03-12 14:16:18.889278696 +0000 UTC m=+239.224657265" watchObservedRunningTime="2026-03-12 14:16:18.891306818 +0000 UTC m=+239.226685387" Mar 12 14:16:18.946963 master-0 kubenswrapper[7440]: I0312 14:16:18.946875 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9bljc" podStartSLOduration=9.439535269 podStartE2EDuration="11.946860063s" podCreationTimestamp="2026-03-12 14:16:07 +0000 UTC" firstStartedPulling="2026-03-12 14:16:15.830374425 +0000 UTC m=+236.165752984" lastFinishedPulling="2026-03-12 14:16:18.337699219 +0000 UTC m=+238.673077778" observedRunningTime="2026-03-12 14:16:18.946285198 +0000 UTC m=+239.281663757" watchObservedRunningTime="2026-03-12 14:16:18.946860063 +0000 UTC m=+239.282238622" Mar 12 14:16:19.020126 master-0 kubenswrapper[7440]: I0312 14:16:19.019945 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gbmc" podStartSLOduration=10.531737357 podStartE2EDuration="13.019924273s" podCreationTimestamp="2026-03-12 14:16:06 +0000 UTC" firstStartedPulling="2026-03-12 14:16:15.823591282 +0000 UTC 
m=+236.158969841" lastFinishedPulling="2026-03-12 14:16:18.311778208 +0000 UTC m=+238.647156757" observedRunningTime="2026-03-12 14:16:19.019152574 +0000 UTC m=+239.354531133" watchObservedRunningTime="2026-03-12 14:16:19.019924273 +0000 UTC m=+239.355302832"
Mar 12 14:16:19.021131 master-0 kubenswrapper[7440]: I0312 14:16:19.021089 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgqz4" podStartSLOduration=8.228288849 podStartE2EDuration="11.021083333s" podCreationTimestamp="2026-03-12 14:16:08 +0000 UTC" firstStartedPulling="2026-03-12 14:16:15.826783874 +0000 UTC m=+236.162162443" lastFinishedPulling="2026-03-12 14:16:18.619578368 +0000 UTC m=+238.954956927" observedRunningTime="2026-03-12 14:16:18.992152926 +0000 UTC m=+239.327531485" watchObservedRunningTime="2026-03-12 14:16:19.021083333 +0000 UTC m=+239.356461882"
Mar 12 14:16:20.256198 master-0 kubenswrapper[7440]: I0312 14:16:20.256149 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:16:22.077343 master-0 kubenswrapper[7440]: I0312 14:16:22.077276 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"]
Mar 12 14:16:22.078225 master-0 kubenswrapper[7440]: I0312 14:16:22.078053 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.079826 master-0 kubenswrapper[7440]: I0312 14:16:22.079763 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-2gl5t"
Mar 12 14:16:22.080009 master-0 kubenswrapper[7440]: I0312 14:16:22.079854 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 14:16:22.248682 master-0 kubenswrapper[7440]: I0312 14:16:22.248616 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"]
Mar 12 14:16:22.261777 master-0 kubenswrapper[7440]: I0312 14:16:22.261726 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.261982 master-0 kubenswrapper[7440]: I0312 14:16:22.261820 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.262033 master-0 kubenswrapper[7440]: I0312 14:16:22.261979 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.364026 master-0 kubenswrapper[7440]: I0312 14:16:22.363876 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.364216 master-0 kubenswrapper[7440]: I0312 14:16:22.364095 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.364216 master-0 kubenswrapper[7440]: I0312 14:16:22.364129 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.364216 master-0 kubenswrapper[7440]: I0312 14:16:22.364082 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.364216 master-0 kubenswrapper[7440]: I0312 14:16:22.364210 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.386778 master-0 kubenswrapper[7440]: I0312 14:16:22.386684 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.402012 master-0 kubenswrapper[7440]: I0312 14:16:22.401969 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:22.443706 master-0 kubenswrapper[7440]: I0312 14:16:22.443604 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 12 14:16:22.444652 master-0 kubenswrapper[7440]: I0312 14:16:22.444624 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.450985 master-0 kubenswrapper[7440]: I0312 14:16:22.450926 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 12 14:16:22.453512 master-0 kubenswrapper[7440]: I0312 14:16:22.453416 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-wm5wg"
Mar 12 14:16:22.453512 master-0 kubenswrapper[7440]: I0312 14:16:22.453433 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 14:16:22.568469 master-0 kubenswrapper[7440]: I0312 14:16:22.568358 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.568778 master-0 kubenswrapper[7440]: I0312 14:16:22.568588 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.568839 master-0 kubenswrapper[7440]: I0312 14:16:22.568811 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.672063 master-0 kubenswrapper[7440]: I0312 14:16:22.669886 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.672063 master-0 kubenswrapper[7440]: I0312 14:16:22.670006 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.672063 master-0 kubenswrapper[7440]: I0312 14:16:22.670055 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.672063 master-0 kubenswrapper[7440]: I0312 14:16:22.670180 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.672063 master-0 kubenswrapper[7440]: I0312 14:16:22.670225 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.686590 master-0 kubenswrapper[7440]: I0312 14:16:22.686545 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.809528 master-0 kubenswrapper[7440]: I0312 14:16:22.809474 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 12 14:16:22.815740 master-0 kubenswrapper[7440]: I0312 14:16:22.812526 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"]
Mar 12 14:16:22.828214 master-0 kubenswrapper[7440]: W0312 14:16:22.828066 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod941c0808_bbfd_467e_b733_3a8294163ee5.slice/crio-b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088 WatchSource:0}: Error finding container b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088: Status 404 returned error can't find the container with id b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088
Mar 12 14:16:22.900130 master-0 kubenswrapper[7440]: I0312 14:16:22.900083 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerStarted","Data":"b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088"}
Mar 12 14:16:23.255108 master-0 kubenswrapper[7440]: I0312 14:16:23.255054 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 12 14:16:23.261441 master-0 kubenswrapper[7440]: W0312 14:16:23.261377 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0c8675d4_a0be_42a3_96af_e56f5fb02983.slice/crio-3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce WatchSource:0}: Error finding container 3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce: Status 404 returned error can't find the container with id 3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce
Mar 12 14:16:23.907259 master-0 kubenswrapper[7440]: I0312 14:16:23.907099 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerStarted","Data":"b0d7763766a63cc91dd74368313cbb94587dedcd2efd8ded0e17187af3e40d25"}
Mar 12 14:16:23.908985 master-0 kubenswrapper[7440]: I0312 14:16:23.908942 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerStarted","Data":"c501e9b39beb072c6b4373a31e843bee99560319d607f9fde7f18203290ac2ca"}
Mar 12 14:16:23.909056 master-0 kubenswrapper[7440]: I0312 14:16:23.908989 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerStarted","Data":"3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce"}
Mar 12 14:16:23.922858 master-0 kubenswrapper[7440]: I0312 14:16:23.922745 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" podStartSLOduration=1.922617701 podStartE2EDuration="1.922617701s" podCreationTimestamp="2026-03-12 14:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:23.920949479 +0000 UTC m=+244.256328058" watchObservedRunningTime="2026-03-12 14:16:23.922617701 +0000 UTC m=+244.257996270"
Mar 12 14:16:23.940984 master-0 kubenswrapper[7440]: I0312 14:16:23.940878 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=1.940855656 podStartE2EDuration="1.940855656s" podCreationTimestamp="2026-03-12 14:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:23.93669957 +0000 UTC m=+244.272078149" watchObservedRunningTime="2026-03-12 14:16:23.940855656 +0000 UTC m=+244.276234225"
Mar 12 14:16:24.419649 master-0 kubenswrapper[7440]: I0312 14:16:24.419579 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:24.419649 master-0 kubenswrapper[7440]: I0312 14:16:24.419651 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:16:24.443955 master-0 kubenswrapper[7440]: I0312 14:16:24.443876 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:24.443955 master-0 kubenswrapper[7440]: I0312 14:16:24.443952 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:24.462324 master-0 kubenswrapper[7440]: I0312 14:16:24.462279 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:16:24.462324 master-0 kubenswrapper[7440]: I0312 14:16:24.462321 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:16:24.471313 master-0 kubenswrapper[7440]: I0312 14:16:24.471258 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:24.471479 master-0 kubenswrapper[7440]: I0312 14:16:24.471320 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:24.480192 master-0 kubenswrapper[7440]: I0312 14:16:24.480130 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:24.499496 master-0 kubenswrapper[7440]: I0312 14:16:24.499205 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:16:24.506778 master-0 kubenswrapper[7440]: I0312 14:16:24.506729 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:24.804755 master-0 kubenswrapper[7440]: I0312 14:16:24.804688 7440 scope.go:117] "RemoveContainer" containerID="91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075"
Mar 12 14:16:24.967383 master-0 kubenswrapper[7440]: I0312 14:16:24.967341 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:16:24.971515 master-0 kubenswrapper[7440]: I0312 14:16:24.971459 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:16:24.979210 master-0 kubenswrapper[7440]: I0312 14:16:24.979034 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:16:25.033940 master-0 kubenswrapper[7440]: I0312 14:16:25.032887 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 12 14:16:25.033940 master-0 kubenswrapper[7440]: I0312 14:16:25.033582 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.037141 master-0 kubenswrapper[7440]: I0312 14:16:25.035600 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-svthn"
Mar 12 14:16:25.037141 master-0 kubenswrapper[7440]: I0312 14:16:25.036215 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 12 14:16:25.051930 master-0 kubenswrapper[7440]: I0312 14:16:25.051418 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 12 14:16:25.201151 master-0 kubenswrapper[7440]: I0312 14:16:25.200956 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.201151 master-0 kubenswrapper[7440]: I0312 14:16:25.201083 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.201151 master-0 kubenswrapper[7440]: I0312 14:16:25.201125 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.302589 master-0 kubenswrapper[7440]: I0312 14:16:25.302500 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.302589 master-0 kubenswrapper[7440]: I0312 14:16:25.302602 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.303042 master-0 kubenswrapper[7440]: I0312 14:16:25.302629 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.303042 master-0 kubenswrapper[7440]: I0312 14:16:25.302643 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.303042 master-0 kubenswrapper[7440]: I0312 14:16:25.302606 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.325176 master-0 kubenswrapper[7440]: I0312 14:16:25.325092 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") " pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.359894 master-0 kubenswrapper[7440]: I0312 14:16:25.359806 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 12 14:16:25.482660 master-0 kubenswrapper[7440]: I0312 14:16:25.482243 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9bljc" podUID="2f59d485-9f69-4f36-836e-6338f84b7d69" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:16:25.482660 master-0 kubenswrapper[7440]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:16:25.482660 master-0 kubenswrapper[7440]: >
Mar 12 14:16:25.785072 master-0 kubenswrapper[7440]: I0312 14:16:25.785010 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 12 14:16:25.789088 master-0 kubenswrapper[7440]: W0312 14:16:25.789031 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb2d8e6e9_c10f_4b43_8155_9addbfddba2e.slice/crio-f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543 WatchSource:0}: Error finding container f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543: Status 404 returned error can't find the container with id f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543
Mar 12 14:16:25.932995 master-0 kubenswrapper[7440]: I0312 14:16:25.925959 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30"}
Mar 12 14:16:25.934220 master-0 kubenswrapper[7440]: I0312 14:16:25.934164 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerStarted","Data":"f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543"}
Mar 12 14:16:26.751289 master-0 kubenswrapper[7440]: I0312 14:16:26.751220 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"]
Mar 12 14:16:26.752677 master-0 kubenswrapper[7440]: I0312 14:16:26.752635 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:26.754994 master-0 kubenswrapper[7440]: I0312 14:16:26.754953 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-kblmx"
Mar 12 14:16:26.755248 master-0 kubenswrapper[7440]: I0312 14:16:26.755220 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 14:16:26.768926 master-0 kubenswrapper[7440]: I0312 14:16:26.768860 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"]
Mar 12 14:16:26.784919 master-0 kubenswrapper[7440]: I0312 14:16:26.784027 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"]
Mar 12 14:16:26.785569 master-0 kubenswrapper[7440]: I0312 14:16:26.785303 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.797930 master-0 kubenswrapper[7440]: I0312 14:16:26.797469 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 14:16:26.797930 master-0 kubenswrapper[7440]: I0312 14:16:26.797685 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 14:16:26.797930 master-0 kubenswrapper[7440]: I0312 14:16:26.797793 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 12 14:16:26.797930 master-0 kubenswrapper[7440]: I0312 14:16:26.797910 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-rn7z4"
Mar 12 14:16:26.798294 master-0 kubenswrapper[7440]: I0312 14:16:26.798078 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 12 14:16:26.798294 master-0 kubenswrapper[7440]: I0312 14:16:26.798188 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 12 14:16:26.932776 master-0 kubenswrapper[7440]: I0312 14:16:26.932695 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.932776 master-0 kubenswrapper[7440]: I0312 14:16:26.932758 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.932776 master-0 kubenswrapper[7440]: I0312 14:16:26.932782 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj9j\" (UniqueName: \"kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.933129 master-0 kubenswrapper[7440]: I0312 14:16:26.933088 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.933186 master-0 kubenswrapper[7440]: I0312 14:16:26.933157 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqx42\" (UniqueName: \"kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:26.933229 master-0 kubenswrapper[7440]: I0312 14:16:26.933199 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:26.933274 master-0 kubenswrapper[7440]: I0312 14:16:26.933247 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:26.933421 master-0 kubenswrapper[7440]: I0312 14:16:26.933353 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:26.939785 master-0 kubenswrapper[7440]: I0312 14:16:26.939738 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerStarted","Data":"6332902d5d84cf465484ab14dac64d9b60905fd555e191dc35b3857c84ea5469"}
Mar 12 14:16:27.035005 master-0 kubenswrapper[7440]: I0312 14:16:27.034877 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035005 master-0 kubenswrapper[7440]: I0312 14:16:27.034941 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035005 master-0 kubenswrapper[7440]: I0312 14:16:27.034964 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj9j\" (UniqueName: \"kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035243 master-0 kubenswrapper[7440]: I0312 14:16:27.035112 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035243 master-0 kubenswrapper[7440]: I0312 14:16:27.035173 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035310 master-0 kubenswrapper[7440]: I0312 14:16:27.035258 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqx42\" (UniqueName: \"kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:27.035500 master-0 kubenswrapper[7440]: I0312 14:16:27.035473 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.035552 master-0 kubenswrapper[7440]: I0312 14:16:27.035533 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:27.035588 master-0 kubenswrapper[7440]: I0312 14:16:27.035555 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:27.036122 master-0 kubenswrapper[7440]: I0312 14:16:27.035796 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.036327 master-0 kubenswrapper[7440]: I0312 14:16:27.036301 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:16:27.036715 master-0 kubenswrapper[7440]: I0312 14:16:27.036681 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"
Mar 12 14:16:27.038494 master-0 kubenswrapper[7440]: I0312 14:16:27.038466 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod
\"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:16:27.043569 master-0 kubenswrapper[7440]: I0312 14:16:27.043533 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:16:27.054719 master-0 kubenswrapper[7440]: I0312 14:16:27.054633 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flj9j\" (UniqueName: \"kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:16:27.057613 master-0 kubenswrapper[7440]: I0312 14:16:27.057576 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqx42\" (UniqueName: \"kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:16:27.073208 master-0 kubenswrapper[7440]: I0312 14:16:27.073166 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:16:27.115482 master-0 kubenswrapper[7440]: I0312 14:16:27.115075 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:16:27.135138 master-0 kubenswrapper[7440]: W0312 14:16:27.135088 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1047bb4a_135f_488d_9399_0518cb3a827d.slice/crio-b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199 WatchSource:0}: Error finding container b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199: Status 404 returned error can't find the container with id b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199 Mar 12 14:16:27.947911 master-0 kubenswrapper[7440]: I0312 14:16:27.947834 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"a573d71f938ba5f8098acbd1d172d8565a7766835eb5b928e725d99289f6a092"} Mar 12 14:16:27.947911 master-0 kubenswrapper[7440]: I0312 14:16:27.947891 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"380f10a329a2eea87fd21dfa83c04f4ce73f4e4ef348556c89b039d62e9dac7d"} Mar 12 14:16:27.947911 master-0 kubenswrapper[7440]: I0312 14:16:27.947919 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199"} Mar 12 14:16:29.352526 master-0 kubenswrapper[7440]: I0312 14:16:29.352459 7440 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=4.352432983 podStartE2EDuration="4.352432983s" podCreationTimestamp="2026-03-12 14:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:26.975335705 +0000 UTC m=+247.310714264" watchObservedRunningTime="2026-03-12 14:16:29.352432983 +0000 UTC m=+249.687811552" Mar 12 14:16:29.353007 master-0 kubenswrapper[7440]: I0312 14:16:29.352832 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf"] Mar 12 14:16:30.032045 master-0 kubenswrapper[7440]: I0312 14:16:30.031964 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"2773a86cc6a182bf175dc97eef9809e0caf7310c36237fcf488f8202b3a5b3a1"} Mar 12 14:16:30.033908 master-0 kubenswrapper[7440]: I0312 14:16:30.033856 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"ef9652ff46904d8020e6714eabfec803a7fe8bff55ab4610c8c71c7a4b16e47c"} Mar 12 14:16:30.033980 master-0 kubenswrapper[7440]: I0312 14:16:30.033918 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6"} Mar 12 14:16:30.033980 master-0 kubenswrapper[7440]: I0312 14:16:30.033934 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" 
event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"f0298c9e8c7173c3949586fa731c073a558897f0792064c146633191e5244fab"} Mar 12 14:16:30.755065 master-0 kubenswrapper[7440]: I0312 14:16:30.754987 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" podStartSLOduration=4.75496843 podStartE2EDuration="4.75496843s" podCreationTimestamp="2026-03-12 14:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:30.739009603 +0000 UTC m=+251.074388172" watchObservedRunningTime="2026-03-12 14:16:30.75496843 +0000 UTC m=+251.090346989" Mar 12 14:16:30.756700 master-0 kubenswrapper[7440]: I0312 14:16:30.756659 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4"] Mar 12 14:16:30.757345 master-0 kubenswrapper[7440]: I0312 14:16:30.757314 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:30.759038 master-0 kubenswrapper[7440]: I0312 14:16:30.758996 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 14:16:30.762032 master-0 kubenswrapper[7440]: I0312 14:16:30.761986 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-gjwhp"] Mar 12 14:16:30.762919 master-0 kubenswrapper[7440]: I0312 14:16:30.762863 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.764623 master-0 kubenswrapper[7440]: I0312 14:16:30.764573 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59"] Mar 12 14:16:30.765298 master-0 kubenswrapper[7440]: I0312 14:16:30.765269 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:16:30.766345 master-0 kubenswrapper[7440]: I0312 14:16:30.766297 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 14:16:30.766832 master-0 kubenswrapper[7440]: I0312 14:16:30.766797 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 14:16:30.767102 master-0 kubenswrapper[7440]: I0312 14:16:30.767070 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 14:16:30.767289 master-0 kubenswrapper[7440]: I0312 14:16:30.767259 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 14:16:30.767378 master-0 kubenswrapper[7440]: I0312 14:16:30.767353 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 14:16:30.767378 master-0 kubenswrapper[7440]: I0312 14:16:30.767371 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 14:16:30.782509 master-0 kubenswrapper[7440]: I0312 14:16:30.782439 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4"] Mar 12 14:16:30.784805 master-0 kubenswrapper[7440]: I0312 14:16:30.784756 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59"] Mar 12 14:16:30.833355 master-0 kubenswrapper[7440]: I0312 14:16:30.833300 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.833355 master-0 kubenswrapper[7440]: I0312 14:16:30.833353 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.833595 master-0 kubenswrapper[7440]: I0312 14:16:30.833383 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.833595 master-0 kubenswrapper[7440]: I0312 14:16:30.833423 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntrg\" (UniqueName: \"kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg\") pod \"network-check-source-7c67b67d47-wdt59\" (UID: \"f7b68603-8af3-4a50-8d39-86bbcdf1c597\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:16:30.833595 master-0 kubenswrapper[7440]: I0312 14:16:30.833444 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-b5qg4\" (UID: \"900b2a0e-1e2b-41a3-86f5-639ec1e95969\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:30.833595 master-0 kubenswrapper[7440]: I0312 14:16:30.833472 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56twk\" (UniqueName: \"kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.833595 master-0 kubenswrapper[7440]: I0312 14:16:30.833498 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.872463 master-0 kubenswrapper[7440]: I0312 14:16:30.872385 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vr5md"] Mar 12 14:16:30.873066 master-0 kubenswrapper[7440]: I0312 14:16:30.873035 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:30.879508 master-0 kubenswrapper[7440]: I0312 14:16:30.879371 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-88dlf" Mar 12 14:16:30.879508 master-0 kubenswrapper[7440]: I0312 14:16:30.879456 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 12 14:16:30.934041 master-0 kubenswrapper[7440]: I0312 14:16:30.933995 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.934041 master-0 kubenswrapper[7440]: I0312 14:16:30.934041 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.934263 master-0 kubenswrapper[7440]: I0312 14:16:30.934063 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:30.934263 master-0 kubenswrapper[7440]: I0312 14:16:30.934196 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth\") pod 
\"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.934263 master-0 kubenswrapper[7440]: I0312 14:16:30.934259 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:30.934536 master-0 kubenswrapper[7440]: I0312 14:16:30.934486 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntrg\" (UniqueName: \"kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg\") pod \"network-check-source-7c67b67d47-wdt59\" (UID: \"f7b68603-8af3-4a50-8d39-86bbcdf1c597\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:16:30.934740 master-0 kubenswrapper[7440]: I0312 14:16:30.934709 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-b5qg4\" (UID: \"900b2a0e-1e2b-41a3-86f5-639ec1e95969\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:30.934786 master-0 kubenswrapper[7440]: I0312 14:16:30.934772 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:30.934829 master-0 kubenswrapper[7440]: I0312 14:16:30.934809 7440 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs6k\" (UniqueName: \"kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:30.934877 master-0 kubenswrapper[7440]: I0312 14:16:30.934857 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56twk\" (UniqueName: \"kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.935032 master-0 kubenswrapper[7440]: I0312 14:16:30.935005 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.935098 master-0 kubenswrapper[7440]: I0312 14:16:30.935019 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.937028 master-0 kubenswrapper[7440]: I0312 14:16:30.936998 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.937802 master-0 kubenswrapper[7440]: I0312 14:16:30.937766 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.938221 master-0 kubenswrapper[7440]: I0312 14:16:30.938202 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.945630 master-0 kubenswrapper[7440]: I0312 14:16:30.945589 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-b5qg4\" (UID: \"900b2a0e-1e2b-41a3-86f5-639ec1e95969\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:30.962913 master-0 kubenswrapper[7440]: I0312 14:16:30.962856 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56twk\" (UniqueName: \"kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:30.963662 master-0 kubenswrapper[7440]: I0312 14:16:30.963641 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntrg\" (UniqueName: 
\"kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg\") pod \"network-check-source-7c67b67d47-wdt59\" (UID: \"f7b68603-8af3-4a50-8d39-86bbcdf1c597\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:16:31.036068 master-0 kubenswrapper[7440]: I0312 14:16:31.035912 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.036068 master-0 kubenswrapper[7440]: I0312 14:16:31.036040 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.036285 master-0 kubenswrapper[7440]: I0312 14:16:31.036105 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.036285 master-0 kubenswrapper[7440]: I0312 14:16:31.036130 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs6k\" (UniqueName: \"kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.036285 master-0 kubenswrapper[7440]: I0312 14:16:31.036185 7440 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.036838 master-0 kubenswrapper[7440]: I0312 14:16:31.036790 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.037066 master-0 kubenswrapper[7440]: I0312 14:16:31.036914 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.050752 master-0 kubenswrapper[7440]: I0312 14:16:31.050706 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs6k\" (UniqueName: \"kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k\") pod \"cni-sysctl-allowlist-ds-vr5md\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.058681 master-0 kubenswrapper[7440]: I0312 14:16:31.056529 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" podStartSLOduration=5.05651325 podStartE2EDuration="5.05651325s" podCreationTimestamp="2026-03-12 14:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 14:16:31.054985481 +0000 UTC m=+251.390364090" watchObservedRunningTime="2026-03-12 14:16:31.05651325 +0000 UTC m=+251.391891809" Mar 12 14:16:31.097926 master-0 kubenswrapper[7440]: I0312 14:16:31.097846 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:31.128229 master-0 kubenswrapper[7440]: I0312 14:16:31.128200 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:31.147159 master-0 kubenswrapper[7440]: I0312 14:16:31.147088 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:16:31.148666 master-0 kubenswrapper[7440]: W0312 14:16:31.148066 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f6ebd3_98c8_457c_a88c_7e81270f01b5.slice/crio-fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1 WatchSource:0}: Error finding container fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1: Status 404 returned error can't find the container with id fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1 Mar 12 14:16:31.199245 master-0 kubenswrapper[7440]: I0312 14:16:31.199165 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:31.229466 master-0 kubenswrapper[7440]: W0312 14:16:31.229418 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37aeb9b1_9138_41e8_83d1_8c0e0a60a00e.slice/crio-a9054a9359d736ed3f297de33ab43b49ffefd2bc4dddda05743306c3b05999a8 WatchSource:0}: Error finding container a9054a9359d736ed3f297de33ab43b49ffefd2bc4dddda05743306c3b05999a8: Status 404 returned error can't find the container with id a9054a9359d736ed3f297de33ab43b49ffefd2bc4dddda05743306c3b05999a8 Mar 12 14:16:31.478254 master-0 kubenswrapper[7440]: I0312 14:16:31.477985 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4"] Mar 12 14:16:31.479741 master-0 kubenswrapper[7440]: W0312 14:16:31.479702 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900b2a0e_1e2b_41a3_86f5_639ec1e95969.slice/crio-e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac WatchSource:0}: Error finding container e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac: Status 404 returned error can't find the container with id e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac Mar 12 14:16:31.565506 master-0 kubenswrapper[7440]: I0312 14:16:31.565442 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59"] Mar 12 14:16:31.569568 master-0 kubenswrapper[7440]: W0312 14:16:31.569498 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b68603_8af3_4a50_8d39_86bbcdf1c597.slice/crio-6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534 WatchSource:0}: Error finding container 
6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534: Status 404 returned error can't find the container with id 6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534 Mar 12 14:16:31.805212 master-0 kubenswrapper[7440]: I0312 14:16:31.805182 7440 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:16:32.054778 master-0 kubenswrapper[7440]: I0312 14:16:32.054706 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" event={"ID":"f7b68603-8af3-4a50-8d39-86bbcdf1c597","Type":"ContainerStarted","Data":"1564943ad1ff64ec05cc4bdb39b9cac207880b0ddd829f16092763ce6b2053d9"} Mar 12 14:16:32.055013 master-0 kubenswrapper[7440]: I0312 14:16:32.054800 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" event={"ID":"f7b68603-8af3-4a50-8d39-86bbcdf1c597","Type":"ContainerStarted","Data":"6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534"} Mar 12 14:16:32.059541 master-0 kubenswrapper[7440]: I0312 14:16:32.059487 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" event={"ID":"900b2a0e-1e2b-41a3-86f5-639ec1e95969","Type":"ContainerStarted","Data":"e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac"} Mar 12 14:16:32.063052 master-0 kubenswrapper[7440]: I0312 14:16:32.062819 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" event={"ID":"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e","Type":"ContainerStarted","Data":"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca"} Mar 12 14:16:32.063140 master-0 kubenswrapper[7440]: I0312 14:16:32.063054 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" 
event={"ID":"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e","Type":"ContainerStarted","Data":"a9054a9359d736ed3f297de33ab43b49ffefd2bc4dddda05743306c3b05999a8"} Mar 12 14:16:32.064157 master-0 kubenswrapper[7440]: I0312 14:16:32.064125 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:32.080266 master-0 kubenswrapper[7440]: I0312 14:16:32.080204 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1"} Mar 12 14:16:32.081496 master-0 kubenswrapper[7440]: I0312 14:16:32.081409 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" podStartSLOduration=299.081396161 podStartE2EDuration="4m59.081396161s" podCreationTimestamp="2026-03-12 14:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:32.080395985 +0000 UTC m=+252.415774554" watchObservedRunningTime="2026-03-12 14:16:32.081396161 +0000 UTC m=+252.416774740" Mar 12 14:16:32.108123 master-0 kubenswrapper[7440]: I0312 14:16:32.108047 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" podStartSLOduration=2.108023759 podStartE2EDuration="2.108023759s" podCreationTimestamp="2026-03-12 14:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:32.102617611 +0000 UTC m=+252.437996190" watchObservedRunningTime="2026-03-12 14:16:32.108023759 +0000 UTC m=+252.443402318" Mar 12 14:16:32.121926 master-0 kubenswrapper[7440]: I0312 14:16:32.121235 7440 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:16:33.532805 master-0 kubenswrapper[7440]: I0312 14:16:33.532742 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 14:16:33.533346 master-0 kubenswrapper[7440]: I0312 14:16:33.532957 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="be4847ff-0a31-4147-93f6-0cdb03f2418d" containerName="installer" containerID="cri-o://241aab17123596d30cb151981c1709611449c7907327ce4b19c53019951ff0d7" gracePeriod=30 Mar 12 14:16:34.093129 master-0 kubenswrapper[7440]: I0312 14:16:34.093013 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" event={"ID":"900b2a0e-1e2b-41a3-86f5-639ec1e95969","Type":"ContainerStarted","Data":"6b9d3f1d90ce9219f6b4917e4b3176236cb57e09e88592cc7f4e6e459e15ea90"} Mar 12 14:16:34.093288 master-0 kubenswrapper[7440]: I0312 14:16:34.093186 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:34.094659 master-0 kubenswrapper[7440]: I0312 14:16:34.094628 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1"} Mar 12 14:16:34.097925 master-0 kubenswrapper[7440]: I0312 14:16:34.097866 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:16:34.114446 master-0 kubenswrapper[7440]: I0312 14:16:34.114376 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" podStartSLOduration=212.788709247 podStartE2EDuration="3m35.114358114s" podCreationTimestamp="2026-03-12 14:12:59 +0000 UTC" firstStartedPulling="2026-03-12 14:16:31.4814016 +0000 UTC m=+251.816780159" lastFinishedPulling="2026-03-12 14:16:33.807050467 +0000 UTC m=+254.142429026" observedRunningTime="2026-03-12 14:16:34.110950827 +0000 UTC m=+254.446329386" watchObservedRunningTime="2026-03-12 14:16:34.114358114 +0000 UTC m=+254.449736673" Mar 12 14:16:34.130824 master-0 kubenswrapper[7440]: I0312 14:16:34.130781 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:34.136445 master-0 kubenswrapper[7440]: I0312 14:16:34.136396 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:34.136445 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:34.136445 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:34.136445 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:34.136702 master-0 kubenswrapper[7440]: I0312 14:16:34.136465 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:34.172092 master-0 kubenswrapper[7440]: I0312 14:16:34.172008 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podStartSLOduration=223.523815381 podStartE2EDuration="3m46.171985822s" podCreationTimestamp="2026-03-12 14:12:48 +0000 UTC" firstStartedPulling="2026-03-12 
14:16:31.156169537 +0000 UTC m=+251.491548096" lastFinishedPulling="2026-03-12 14:16:33.804339978 +0000 UTC m=+254.139718537" observedRunningTime="2026-03-12 14:16:34.167878357 +0000 UTC m=+254.503256916" watchObservedRunningTime="2026-03-12 14:16:34.171985822 +0000 UTC m=+254.507364381" Mar 12 14:16:34.216844 master-0 kubenswrapper[7440]: I0312 14:16:34.216794 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nj7qg"] Mar 12 14:16:34.217590 master-0 kubenswrapper[7440]: I0312 14:16:34.217560 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.219669 master-0 kubenswrapper[7440]: I0312 14:16:34.219631 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-28q99" Mar 12 14:16:34.219669 master-0 kubenswrapper[7440]: I0312 14:16:34.219660 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 14:16:34.219762 master-0 kubenswrapper[7440]: I0312 14:16:34.219740 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 14:16:34.293411 master-0 kubenswrapper[7440]: I0312 14:16:34.293353 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.293411 master-0 kubenswrapper[7440]: I0312 14:16:34.293414 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcz8p\" (UniqueName: 
\"kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.293637 master-0 kubenswrapper[7440]: I0312 14:16:34.293506 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.394125 master-0 kubenswrapper[7440]: I0312 14:16:34.394000 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.394125 master-0 kubenswrapper[7440]: I0312 14:16:34.394063 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.394125 master-0 kubenswrapper[7440]: I0312 14:16:34.394107 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcz8p\" (UniqueName: \"kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.397746 master-0 
kubenswrapper[7440]: I0312 14:16:34.397093 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.397908 master-0 kubenswrapper[7440]: I0312 14:16:34.397845 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.414072 master-0 kubenswrapper[7440]: I0312 14:16:34.414024 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcz8p\" (UniqueName: \"kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.458890 master-0 kubenswrapper[7440]: I0312 14:16:34.458851 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:34.511248 master-0 kubenswrapper[7440]: I0312 14:16:34.511200 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:16:34.537619 master-0 kubenswrapper[7440]: I0312 14:16:34.537566 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:16:34.554344 master-0 kubenswrapper[7440]: W0312 14:16:34.554277 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b66a2a2_4e14_4d24_b89c_b1e8bbcec92a.slice/crio-133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b WatchSource:0}: Error finding container 133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b: Status 404 returned error can't find the container with id 133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b Mar 12 14:16:35.112926 master-0 kubenswrapper[7440]: I0312 14:16:35.112781 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nj7qg" event={"ID":"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a","Type":"ContainerStarted","Data":"540809e8c57264298020b8f7c329852fdc11e5e328ec4a2eb78873d2a2fd4933"} Mar 12 14:16:35.112926 master-0 kubenswrapper[7440]: I0312 14:16:35.112819 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nj7qg" event={"ID":"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a","Type":"ContainerStarted","Data":"133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b"} Mar 12 14:16:35.131957 master-0 kubenswrapper[7440]: I0312 14:16:35.131876 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:35.131957 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:35.131957 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:35.131957 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:35.131957 master-0 kubenswrapper[7440]: I0312 14:16:35.131952 7440 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:35.237182 master-0 kubenswrapper[7440]: I0312 14:16:35.233727 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nj7qg" podStartSLOduration=1.23370593 podStartE2EDuration="1.23370593s" podCreationTimestamp="2026-03-12 14:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:35.13319046 +0000 UTC m=+255.468569039" watchObservedRunningTime="2026-03-12 14:16:35.23370593 +0000 UTC m=+255.569084489" Mar 12 14:16:35.237182 master-0 kubenswrapper[7440]: I0312 14:16:35.236645 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h"] Mar 12 14:16:35.237768 master-0 kubenswrapper[7440]: I0312 14:16:35.237555 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.248482 master-0 kubenswrapper[7440]: I0312 14:16:35.248062 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 14:16:35.248482 master-0 kubenswrapper[7440]: I0312 14:16:35.248220 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kl5h5" Mar 12 14:16:35.248482 master-0 kubenswrapper[7440]: I0312 14:16:35.248062 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 14:16:35.248482 master-0 kubenswrapper[7440]: I0312 14:16:35.248372 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 14:16:35.252928 master-0 kubenswrapper[7440]: I0312 14:16:35.250030 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h"] Mar 12 14:16:35.413849 master-0 kubenswrapper[7440]: I0312 14:16:35.413730 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.414458 master-0 kubenswrapper[7440]: I0312 14:16:35.414144 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 
14:16:35.414458 master-0 kubenswrapper[7440]: I0312 14:16:35.414237 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.414458 master-0 kubenswrapper[7440]: I0312 14:16:35.414287 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms688\" (UniqueName: \"kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.515052 master-0 kubenswrapper[7440]: I0312 14:16:35.514972 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.515052 master-0 kubenswrapper[7440]: I0312 14:16:35.515034 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.515328 master-0 kubenswrapper[7440]: I0312 14:16:35.515216 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.515328 master-0 kubenswrapper[7440]: I0312 14:16:35.515274 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms688\" (UniqueName: \"kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.517164 master-0 kubenswrapper[7440]: I0312 14:16:35.517130 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.519012 master-0 kubenswrapper[7440]: I0312 14:16:35.518984 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.519525 master-0 kubenswrapper[7440]: I0312 14:16:35.519482 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: 
\"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.536577 master-0 kubenswrapper[7440]: I0312 14:16:35.536513 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms688\" (UniqueName: \"kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:35.571695 master-0 kubenswrapper[7440]: I0312 14:16:35.571641 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:16:36.042115 master-0 kubenswrapper[7440]: I0312 14:16:36.042062 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h"] Mar 12 14:16:36.117778 master-0 kubenswrapper[7440]: I0312 14:16:36.117680 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"6248f60ded635728b07f9ffbb9d72d48359f97cdb83b7f5d2e6153af60f77309"} Mar 12 14:16:36.131447 master-0 kubenswrapper[7440]: I0312 14:16:36.131397 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:36.131447 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:36.131447 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:36.131447 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:36.131700 master-0 kubenswrapper[7440]: I0312 14:16:36.131471 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:37.133632 master-0 kubenswrapper[7440]: I0312 14:16:37.132636 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:37.133632 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:37.133632 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:37.133632 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:37.133632 master-0 kubenswrapper[7440]: I0312 14:16:37.132699 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:37.930740 master-0 kubenswrapper[7440]: I0312 14:16:37.930662 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 14:16:37.931358 master-0 kubenswrapper[7440]: I0312 14:16:37.931323 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:37.942179 master-0 kubenswrapper[7440]: I0312 14:16:37.942120 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 14:16:37.949562 master-0 kubenswrapper[7440]: I0312 14:16:37.949522 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:37.949752 master-0 kubenswrapper[7440]: I0312 14:16:37.949672 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:37.950050 master-0 kubenswrapper[7440]: I0312 14:16:37.950012 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.051188 master-0 kubenswrapper[7440]: I0312 14:16:38.051049 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.051188 master-0 kubenswrapper[7440]: I0312 14:16:38.051092 7440 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.051188 master-0 kubenswrapper[7440]: I0312 14:16:38.051135 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.051328 master-0 kubenswrapper[7440]: I0312 14:16:38.051205 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.051328 master-0 kubenswrapper[7440]: I0312 14:16:38.051243 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.065019 master-0 kubenswrapper[7440]: I0312 14:16:38.064981 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access\") pod \"installer-3-master-0\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.131441 master-0 kubenswrapper[7440]: I0312 14:16:38.131387 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:38.131441 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:38.131441 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:38.131441 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:38.131725 master-0 kubenswrapper[7440]: I0312 14:16:38.131451 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:38.149711 master-0 kubenswrapper[7440]: I0312 14:16:38.149647 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"d705af3964cb121f77d5ca09181cfdf91c9d4d07e3a5599879eb179498167449"} Mar 12 14:16:38.149711 master-0 kubenswrapper[7440]: I0312 14:16:38.149702 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"050fc0b90a67cc99fced813d2d0dfac828853a651e063ec897d38aebb5d47e8e"} Mar 12 14:16:38.316586 master-0 kubenswrapper[7440]: I0312 14:16:38.316511 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:16:38.683174 master-0 kubenswrapper[7440]: I0312 14:16:38.683012 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" podStartSLOduration=2.018598476 podStartE2EDuration="3.682994004s" podCreationTimestamp="2026-03-12 14:16:35 +0000 UTC" firstStartedPulling="2026-03-12 14:16:36.046861189 +0000 UTC m=+256.382239748" lastFinishedPulling="2026-03-12 14:16:37.711256717 +0000 UTC m=+258.046635276" observedRunningTime="2026-03-12 14:16:38.170267617 +0000 UTC m=+258.505646186" watchObservedRunningTime="2026-03-12 14:16:38.682994004 +0000 UTC m=+259.018372563" Mar 12 14:16:38.684947 master-0 kubenswrapper[7440]: I0312 14:16:38.684407 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 14:16:38.690140 master-0 kubenswrapper[7440]: W0312 14:16:38.690092 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05fc4965_b390_4edc_a407_d431b06d7612.slice/crio-2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929 WatchSource:0}: Error finding container 2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929: Status 404 returned error can't find the container with id 2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929 Mar 12 14:16:39.132007 master-0 kubenswrapper[7440]: I0312 14:16:39.131925 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:39.132007 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:39.132007 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:39.132007 master-0 kubenswrapper[7440]: healthz check failed Mar 
12 14:16:39.132007 master-0 kubenswrapper[7440]: I0312 14:16:39.132003 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:39.157581 master-0 kubenswrapper[7440]: I0312 14:16:39.157507 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerStarted","Data":"6aa44e483ff3af56ade2c830f5190301f0a2aff21489693f95cab78436b2ad8d"} Mar 12 14:16:39.157581 master-0 kubenswrapper[7440]: I0312 14:16:39.157563 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerStarted","Data":"2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929"} Mar 12 14:16:39.179097 master-0 kubenswrapper[7440]: I0312 14:16:39.178997 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.178979175 podStartE2EDuration="2.178979175s" podCreationTimestamp="2026-03-12 14:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:39.177956249 +0000 UTC m=+259.513334818" watchObservedRunningTime="2026-03-12 14:16:39.178979175 +0000 UTC m=+259.514357754" Mar 12 14:16:39.552049 master-0 kubenswrapper[7440]: I0312 14:16:39.551979 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"] Mar 12 14:16:39.553116 master-0 kubenswrapper[7440]: I0312 14:16:39.553090 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.568283 master-0 kubenswrapper[7440]: I0312 14:16:39.568235 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 14:16:39.569043 master-0 kubenswrapper[7440]: I0312 14:16:39.569010 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7zf28" Mar 12 14:16:39.570207 master-0 kubenswrapper[7440]: I0312 14:16:39.570177 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.570262 master-0 kubenswrapper[7440]: I0312 14:16:39.570209 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.570262 master-0 kubenswrapper[7440]: I0312 14:16:39.570233 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxdqn\" (UniqueName: \"kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.570262 master-0 kubenswrapper[7440]: I0312 
14:16:39.570253 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.573034 master-0 kubenswrapper[7440]: I0312 14:16:39.573002 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 14:16:39.587296 master-0 kubenswrapper[7440]: I0312 14:16:39.587247 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"] Mar 12 14:16:39.596029 master-0 kubenswrapper[7440]: I0312 14:16:39.595991 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-5pkwh"] Mar 12 14:16:39.597425 master-0 kubenswrapper[7440]: I0312 14:16:39.597404 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.598448 master-0 kubenswrapper[7440]: I0312 14:16:39.598409 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts"] Mar 12 14:16:39.599468 master-0 kubenswrapper[7440]: I0312 14:16:39.599448 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.606065 master-0 kubenswrapper[7440]: I0312 14:16:39.606022 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 14:16:39.608274 master-0 kubenswrapper[7440]: I0312 14:16:39.608235 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gbqf6" Mar 12 14:16:39.608452 master-0 kubenswrapper[7440]: I0312 14:16:39.608417 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 14:16:39.608531 master-0 kubenswrapper[7440]: I0312 14:16:39.608509 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8hbfc" Mar 12 14:16:39.608590 master-0 kubenswrapper[7440]: I0312 14:16:39.608568 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 14:16:39.608666 master-0 kubenswrapper[7440]: I0312 14:16:39.608649 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 14:16:39.615916 master-0 kubenswrapper[7440]: I0312 14:16:39.615125 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 14:16:39.657923 master-0 kubenswrapper[7440]: I0312 14:16:39.650009 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts"] Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671632 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67sxk\" (UniqueName: \"kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk\") pod 
\"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671693 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671713 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671735 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671758 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671789 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671819 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671842 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671869 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7krt\" (UniqueName: \"kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671889 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671923 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdqn\" (UniqueName: \"kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671942 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671959 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.671977 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 
14:16:39.672018 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.672044 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.672064 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.677923 master-0 kubenswrapper[7440]: I0312 14:16:39.672085 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.681912 master-0 kubenswrapper[7440]: I0312 14:16:39.678882 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.681912 master-0 kubenswrapper[7440]: I0312 14:16:39.679530 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.681912 master-0 kubenswrapper[7440]: E0312 14:16:39.679843 7440 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 12 14:16:39.681912 master-0 kubenswrapper[7440]: E0312 14:16:39.679880 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls podName:59f21770-429b-4b63-82fd-50ce0daf698d nodeName:}" failed. No retries permitted until 2026-03-12 14:16:40.179868642 +0000 UTC m=+260.515247201 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-jms82" (UID: "59f21770-429b-4b63-82fd-50ce0daf698d") : secret "openshift-state-metrics-tls" not found Mar 12 14:16:39.759065 master-0 kubenswrapper[7440]: I0312 14:16:39.746957 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdqn\" (UniqueName: \"kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.772851 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.772961 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.772979 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 
14:16:39.773026 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773049 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773071 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773296 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773315 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67sxk\" (UniqueName: 
\"kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773355 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773371 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773390 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773407 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773452 7440 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.773479 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7krt\" (UniqueName: \"kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: E0312 14:16:39.774382 7440 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: E0312 14:16:39.774431 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:16:40.274419349 +0000 UTC m=+260.609797908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : secret "node-exporter-tls" not found Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.774942 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: I0312 14:16:39.774983 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: E0312 14:16:39.775043 7440 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 12 14:16:39.775486 master-0 kubenswrapper[7440]: E0312 14:16:39.775068 7440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls podName:a81be38f-e07e-4863-8d61-fdefc2713a6a nodeName:}" failed. No retries permitted until 2026-03-12 14:16:40.275060676 +0000 UTC m=+260.610439235 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-vfvts" (UID: "a81be38f-e07e-4863-8d61-fdefc2713a6a") : secret "kube-state-metrics-tls" not found Mar 12 14:16:39.776112 master-0 kubenswrapper[7440]: I0312 14:16:39.775706 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.776112 master-0 kubenswrapper[7440]: I0312 14:16:39.775752 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.782881 master-0 kubenswrapper[7440]: I0312 14:16:39.779867 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.782881 master-0 kubenswrapper[7440]: I0312 14:16:39.780006 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.782881 master-0 kubenswrapper[7440]: I0312 14:16:39.782838 7440 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.782881 master-0 kubenswrapper[7440]: I0312 14:16:39.782863 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.787875 master-0 kubenswrapper[7440]: I0312 14:16:39.787833 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.790578 master-0 kubenswrapper[7440]: I0312 14:16:39.788580 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:39.814063 master-0 kubenswrapper[7440]: I0312 14:16:39.813763 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sxk\" (UniqueName: \"kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk\") pod \"node-exporter-5pkwh\" (UID: 
\"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:39.815563 master-0 kubenswrapper[7440]: I0312 14:16:39.815522 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7krt\" (UniqueName: \"kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:40.131414 master-0 kubenswrapper[7440]: I0312 14:16:40.131369 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:40.131414 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:40.131414 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:40.131414 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:40.133079 master-0 kubenswrapper[7440]: I0312 14:16:40.131429 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:40.164997 master-0 kubenswrapper[7440]: I0312 14:16:40.164956 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_be4847ff-0a31-4147-93f6-0cdb03f2418d/installer/0.log" Mar 12 14:16:40.165573 master-0 kubenswrapper[7440]: I0312 14:16:40.165011 7440 generic.go:334] "Generic (PLEG): container finished" podID="be4847ff-0a31-4147-93f6-0cdb03f2418d" containerID="241aab17123596d30cb151981c1709611449c7907327ce4b19c53019951ff0d7" exitCode=1 Mar 12 14:16:40.165573 master-0 kubenswrapper[7440]: I0312 14:16:40.165173 7440 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be4847ff-0a31-4147-93f6-0cdb03f2418d","Type":"ContainerDied","Data":"241aab17123596d30cb151981c1709611449c7907327ce4b19c53019951ff0d7"} Mar 12 14:16:40.165573 master-0 kubenswrapper[7440]: I0312 14:16:40.165286 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"be4847ff-0a31-4147-93f6-0cdb03f2418d","Type":"ContainerDied","Data":"189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74"} Mar 12 14:16:40.165573 master-0 kubenswrapper[7440]: I0312 14:16:40.165305 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189b7dd40431337c3300f45e7a77aa01791623bb48bbe77b2a0e96890a222c74" Mar 12 14:16:40.167974 master-0 kubenswrapper[7440]: I0312 14:16:40.167920 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_be4847ff-0a31-4147-93f6-0cdb03f2418d/installer/0.log" Mar 12 14:16:40.167974 master-0 kubenswrapper[7440]: I0312 14:16:40.167961 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 14:16:40.188152 master-0 kubenswrapper[7440]: I0312 14:16:40.188101 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock\") pod \"be4847ff-0a31-4147-93f6-0cdb03f2418d\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " Mar 12 14:16:40.188487 master-0 kubenswrapper[7440]: I0312 14:16:40.188470 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access\") pod \"be4847ff-0a31-4147-93f6-0cdb03f2418d\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " Mar 12 14:16:40.188624 master-0 kubenswrapper[7440]: I0312 14:16:40.188612 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir\") pod \"be4847ff-0a31-4147-93f6-0cdb03f2418d\" (UID: \"be4847ff-0a31-4147-93f6-0cdb03f2418d\") " Mar 12 14:16:40.188949 master-0 kubenswrapper[7440]: I0312 14:16:40.188927 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:40.189109 master-0 kubenswrapper[7440]: I0312 14:16:40.188234 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock" (OuterVolumeSpecName: "var-lock") pod "be4847ff-0a31-4147-93f6-0cdb03f2418d" (UID: "be4847ff-0a31-4147-93f6-0cdb03f2418d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:16:40.189194 master-0 kubenswrapper[7440]: I0312 14:16:40.189159 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be4847ff-0a31-4147-93f6-0cdb03f2418d" (UID: "be4847ff-0a31-4147-93f6-0cdb03f2418d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:16:40.191677 master-0 kubenswrapper[7440]: I0312 14:16:40.191608 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be4847ff-0a31-4147-93f6-0cdb03f2418d" (UID: "be4847ff-0a31-4147-93f6-0cdb03f2418d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:40.194763 master-0 kubenswrapper[7440]: I0312 14:16:40.194720 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:40.194763 master-0 kubenswrapper[7440]: I0312 14:16:40.194751 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be4847ff-0a31-4147-93f6-0cdb03f2418d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:40.194888 master-0 kubenswrapper[7440]: I0312 14:16:40.194766 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be4847ff-0a31-4147-93f6-0cdb03f2418d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:40.238654 master-0 kubenswrapper[7440]: I0312 14:16:40.238617 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:40.296253 master-0 kubenswrapper[7440]: I0312 14:16:40.296205 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:40.296491 master-0 kubenswrapper[7440]: I0312 14:16:40.296341 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:40.299515 master-0 kubenswrapper[7440]: I0312 14:16:40.299448 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:40.299592 master-0 kubenswrapper[7440]: I0312 14:16:40.299527 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:40.380771 master-0 kubenswrapper[7440]: I0312 14:16:40.380602 7440 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:16:40.469040 master-0 kubenswrapper[7440]: I0312 14:16:40.468992 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:16:40.547413 master-0 kubenswrapper[7440]: I0312 14:16:40.547342 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:16:40.597166 master-0 kubenswrapper[7440]: W0312 14:16:40.597028 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb90e26a5_b42f_4fd5_a79b_6f4836a4bfc7.slice/crio-4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4 WatchSource:0}: Error finding container 4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4: Status 404 returned error can't find the container with id 4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4 Mar 12 14:16:40.767913 master-0 kubenswrapper[7440]: I0312 14:16:40.767858 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts"] Mar 12 14:16:40.895951 master-0 kubenswrapper[7440]: I0312 14:16:40.895914 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"] Mar 12 14:16:41.130150 master-0 kubenswrapper[7440]: I0312 14:16:41.130094 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:16:41.131709 master-0 kubenswrapper[7440]: I0312 14:16:41.131671 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 12 14:16:41.131709 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:41.131709 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:41.131709 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:41.131842 master-0 kubenswrapper[7440]: I0312 14:16:41.131731 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:41.182694 master-0 kubenswrapper[7440]: I0312 14:16:41.182641 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"eef0b37dd526322eaef7c1aca76f63285c998e29a07055dc363715ea766db015"} Mar 12 14:16:41.182694 master-0 kubenswrapper[7440]: I0312 14:16:41.182692 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"b91ed73a339c21ab18d17bc789c0ba3301a928d38dce2afb46b197b75f34b51e"} Mar 12 14:16:41.183882 master-0 kubenswrapper[7440]: I0312 14:16:41.183811 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"b067750f065ba84cd14fac759b144c851d17dfcf9ba98a9096e90f8e2906332d"} Mar 12 14:16:41.185612 master-0 kubenswrapper[7440]: I0312 14:16:41.185590 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 14:16:41.186615 master-0 kubenswrapper[7440]: I0312 14:16:41.186572 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4"} Mar 12 14:16:41.221975 master-0 kubenswrapper[7440]: I0312 14:16:41.221885 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 14:16:41.229917 master-0 kubenswrapper[7440]: I0312 14:16:41.225473 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 14:16:41.814030 master-0 kubenswrapper[7440]: I0312 14:16:41.813967 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4847ff-0a31-4147-93f6-0cdb03f2418d" path="/var/lib/kubelet/pods/be4847ff-0a31-4147-93f6-0cdb03f2418d/volumes" Mar 12 14:16:42.130771 master-0 kubenswrapper[7440]: I0312 14:16:42.130710 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:42.130771 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:42.130771 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:42.130771 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:42.131167 master-0 kubenswrapper[7440]: I0312 14:16:42.130777 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:42.195961 master-0 kubenswrapper[7440]: I0312 14:16:42.195889 
7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"3888e133cb6f93fcd878da6d7969a89f350958d23b4b08aa7f61aa0370050771"} Mar 12 14:16:42.197356 master-0 kubenswrapper[7440]: I0312 14:16:42.197316 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"0e8fc01e9a8eda98a015f25b77b74c387b2748cffe4174ae0263f83f13e0be0a"} Mar 12 14:16:43.131136 master-0 kubenswrapper[7440]: I0312 14:16:43.130974 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:43.131136 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:43.131136 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:43.131136 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:43.131136 master-0 kubenswrapper[7440]: I0312 14:16:43.131083 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:43.211999 master-0 kubenswrapper[7440]: I0312 14:16:43.211887 7440 generic.go:334] "Generic (PLEG): container finished" podID="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" containerID="36ab6a383938c1c2c65deef282e5bd58d913849b1497608417a2412a1cf8ab99" exitCode=0 Mar 12 14:16:43.211999 master-0 kubenswrapper[7440]: I0312 14:16:43.211981 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" 
event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerDied","Data":"36ab6a383938c1c2c65deef282e5bd58d913849b1497608417a2412a1cf8ab99"} Mar 12 14:16:43.215441 master-0 kubenswrapper[7440]: I0312 14:16:43.215325 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"6f73e85300f82c74d7e3de259c7823bc1fa3d345012078ea6cbaa374b7196577"} Mar 12 14:16:43.215441 master-0 kubenswrapper[7440]: I0312 14:16:43.215352 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"e9a89dbf9c4b5498b299505ee1db6b94e8fd5fbe2a7174de9621cd8bdf42917f"} Mar 12 14:16:43.257142 master-0 kubenswrapper[7440]: I0312 14:16:43.257066 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" podStartSLOduration=3.02594104 podStartE2EDuration="4.257019012s" podCreationTimestamp="2026-03-12 14:16:39 +0000 UTC" firstStartedPulling="2026-03-12 14:16:40.777804483 +0000 UTC m=+261.113183042" lastFinishedPulling="2026-03-12 14:16:42.008882455 +0000 UTC m=+262.344261014" observedRunningTime="2026-03-12 14:16:43.252358022 +0000 UTC m=+263.587736581" watchObservedRunningTime="2026-03-12 14:16:43.257019012 +0000 UTC m=+263.592397601" Mar 12 14:16:44.131050 master-0 kubenswrapper[7440]: I0312 14:16:44.130996 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:44.131050 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:44.131050 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:16:44.131050 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:44.131387 master-0 kubenswrapper[7440]: I0312 14:16:44.131065 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:44.221798 master-0 kubenswrapper[7440]: I0312 14:16:44.221735 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"4090851a8a1e04f68cb376f8a537549cd0813cb04a4f0fc1281d6f979e4c7445"} Mar 12 14:16:44.223809 master-0 kubenswrapper[7440]: I0312 14:16:44.223783 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"b83e97de107007adbcd23692a3bdbb649ea8264dd63f326fab85915ecb6c5f3a"} Mar 12 14:16:44.223940 master-0 kubenswrapper[7440]: I0312 14:16:44.223923 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"19be5ccb5230010d84871a29080c878437ffbe4a525b10e61775810b14c25703"} Mar 12 14:16:44.246816 master-0 kubenswrapper[7440]: I0312 14:16:44.246745 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" podStartSLOduration=3.074381093 podStartE2EDuration="5.246722636s" podCreationTimestamp="2026-03-12 14:16:39 +0000 UTC" firstStartedPulling="2026-03-12 14:16:41.183734721 +0000 UTC m=+261.519113280" lastFinishedPulling="2026-03-12 14:16:43.356076264 +0000 UTC m=+263.691454823" observedRunningTime="2026-03-12 14:16:44.241534544 +0000 UTC m=+264.576913113" 
watchObservedRunningTime="2026-03-12 14:16:44.246722636 +0000 UTC m=+264.582101195" Mar 12 14:16:44.264225 master-0 kubenswrapper[7440]: I0312 14:16:44.264152 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-5pkwh" podStartSLOduration=3.860491684 podStartE2EDuration="5.26413562s" podCreationTimestamp="2026-03-12 14:16:39 +0000 UTC" firstStartedPulling="2026-03-12 14:16:40.601305048 +0000 UTC m=+260.936683607" lastFinishedPulling="2026-03-12 14:16:42.004948984 +0000 UTC m=+262.340327543" observedRunningTime="2026-03-12 14:16:44.263480532 +0000 UTC m=+264.598859101" watchObservedRunningTime="2026-03-12 14:16:44.26413562 +0000 UTC m=+264.599514179" Mar 12 14:16:44.896676 master-0 kubenswrapper[7440]: I0312 14:16:44.896631 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:16:44.896890 master-0 kubenswrapper[7440]: E0312 14:16:44.896876 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4847ff-0a31-4147-93f6-0cdb03f2418d" containerName="installer" Mar 12 14:16:44.896890 master-0 kubenswrapper[7440]: I0312 14:16:44.896888 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4847ff-0a31-4147-93f6-0cdb03f2418d" containerName="installer" Mar 12 14:16:44.897023 master-0 kubenswrapper[7440]: I0312 14:16:44.897007 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4847ff-0a31-4147-93f6-0cdb03f2418d" containerName="installer" Mar 12 14:16:44.897854 master-0 kubenswrapper[7440]: I0312 14:16:44.897827 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:44.902361 master-0 kubenswrapper[7440]: I0312 14:16:44.902322 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 14:16:44.902509 master-0 kubenswrapper[7440]: I0312 14:16:44.902491 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-gbzqr" Mar 12 14:16:44.904118 master-0 kubenswrapper[7440]: I0312 14:16:44.904090 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 14:16:44.904304 master-0 kubenswrapper[7440]: I0312 14:16:44.904278 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 14:16:44.904453 master-0 kubenswrapper[7440]: I0312 14:16:44.904431 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 14:16:44.905069 master-0 kubenswrapper[7440]: I0312 14:16:44.905039 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 14:16:44.913130 master-0 kubenswrapper[7440]: I0312 14:16:44.913088 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 14:16:44.922196 master-0 kubenswrapper[7440]: I0312 14:16:44.922140 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:16:45.028590 master-0 kubenswrapper[7440]: I0312 14:16:45.028527 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"] Mar 12 14:16:45.029475 master-0 kubenswrapper[7440]: I0312 14:16:45.029450 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.031421 master-0 kubenswrapper[7440]: I0312 14:16:45.031393 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7nn9s21bftmgp" Mar 12 14:16:45.032193 master-0 kubenswrapper[7440]: I0312 14:16:45.032146 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 14:16:45.032193 master-0 kubenswrapper[7440]: I0312 14:16:45.032190 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-787xq" Mar 12 14:16:45.032304 master-0 kubenswrapper[7440]: I0312 14:16:45.032289 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 14:16:45.032374 master-0 kubenswrapper[7440]: I0312 14:16:45.032177 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 14:16:45.032569 master-0 kubenswrapper[7440]: I0312 14:16:45.032540 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 14:16:45.039102 master-0 kubenswrapper[7440]: I0312 14:16:45.039060 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"] Mar 12 14:16:45.061595 master-0 kubenswrapper[7440]: I0312 14:16:45.061548 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 
14:16:45.061601 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061622 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061639 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061668 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061717 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061735 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjplz\" (UniqueName: \"kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.061788 master-0 kubenswrapper[7440]: I0312 14:16:45.061762 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.137921 master-0 kubenswrapper[7440]: I0312 14:16:45.133976 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:45.137921 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:45.137921 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:45.137921 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:45.137921 master-0 kubenswrapper[7440]: I0312 14:16:45.134034 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:16:45.163018 master-0 kubenswrapper[7440]: I0312 14:16:45.162875 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.163300 master-0 kubenswrapper[7440]: I0312 14:16:45.163281 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.163847 master-0 kubenswrapper[7440]: I0312 14:16:45.163827 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.164047 master-0 kubenswrapper[7440]: I0312 14:16:45.164027 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.164165 master-0 kubenswrapper[7440]: I0312 14:16:45.164142 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.164269 master-0 kubenswrapper[7440]: I0312 14:16:45.164253 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.164390 master-0 kubenswrapper[7440]: I0312 14:16:45.164372 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.164493 master-0 kubenswrapper[7440]: I0312 14:16:45.164477 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.164598 master-0 kubenswrapper[7440]: I0312 14:16:45.164579 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " 
pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.164710 master-0 kubenswrapper[7440]: I0312 14:16:45.164693 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.164839 master-0 kubenswrapper[7440]: I0312 14:16:45.164822 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.164983 master-0 kubenswrapper[7440]: I0312 14:16:45.164963 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.165092 master-0 kubenswrapper[7440]: I0312 14:16:45.165074 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjplz\" (UniqueName: \"kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.165209 master-0 kubenswrapper[7440]: I0312 14:16:45.165193 7440 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.165324 master-0 kubenswrapper[7440]: I0312 14:16:45.165306 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.165448 master-0 kubenswrapper[7440]: I0312 14:16:45.163655 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.166430 master-0 kubenswrapper[7440]: I0312 14:16:45.166389 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.166699 master-0 kubenswrapper[7440]: I0312 14:16:45.166660 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: 
\"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.166865 master-0 kubenswrapper[7440]: I0312 14:16:45.166825 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.168154 master-0 kubenswrapper[7440]: I0312 14:16:45.168083 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.168466 master-0 kubenswrapper[7440]: I0312 14:16:45.168434 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.169656 master-0 kubenswrapper[7440]: I0312 14:16:45.169611 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.184846 master-0 kubenswrapper[7440]: I0312 14:16:45.184786 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kjplz\" (UniqueName: \"kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz\") pod \"telemeter-client-cbb5fd9f8-zjmz4\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.215578 master-0 kubenswrapper[7440]: I0312 14:16:45.215523 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:16:45.266343 master-0 kubenswrapper[7440]: I0312 14:16:45.266278 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266343 master-0 kubenswrapper[7440]: I0312 14:16:45.266335 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266882 master-0 kubenswrapper[7440]: I0312 14:16:45.266362 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266882 master-0 kubenswrapper[7440]: I0312 14:16:45.266573 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266882 master-0 kubenswrapper[7440]: I0312 14:16:45.266703 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266882 master-0 kubenswrapper[7440]: I0312 14:16:45.266762 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.266882 master-0 kubenswrapper[7440]: I0312 14:16:45.266860 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.267884 master-0 kubenswrapper[7440]: I0312 14:16:45.267349 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 
14:16:45.267884 master-0 kubenswrapper[7440]: I0312 14:16:45.267537 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.268591 master-0 kubenswrapper[7440]: I0312 14:16:45.268546 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.270833 master-0 kubenswrapper[7440]: I0312 14:16:45.270793 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.273540 master-0 kubenswrapper[7440]: I0312 14:16:45.273499 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.274395 master-0 kubenswrapper[7440]: I0312 14:16:45.274345 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" 
(UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.281833 master-0 kubenswrapper[7440]: I0312 14:16:45.281791 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.355684 master-0 kubenswrapper[7440]: I0312 14:16:45.355621 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:16:45.600390 master-0 kubenswrapper[7440]: I0312 14:16:45.600289 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:16:45.604927 master-0 kubenswrapper[7440]: W0312 14:16:45.604383 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a8ac56_734c_4d51_9171_0540f8b9f242.slice/crio-73192d725f60950aaefb2920f89f9df4128d62e1eb94b2e0025225f509337195 WatchSource:0}: Error finding container 73192d725f60950aaefb2920f89f9df4128d62e1eb94b2e0025225f509337195: Status 404 returned error can't find the container with id 73192d725f60950aaefb2920f89f9df4128d62e1eb94b2e0025225f509337195 Mar 12 14:16:45.729681 master-0 kubenswrapper[7440]: I0312 14:16:45.729631 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"] Mar 12 14:16:45.732330 master-0 kubenswrapper[7440]: W0312 14:16:45.732292 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddf66af_4d97_4c1e_960d_ace98c27961b.slice/crio-b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4 
WatchSource:0}: Error finding container b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4: Status 404 returned error can't find the container with id b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4 Mar 12 14:16:46.133527 master-0 kubenswrapper[7440]: I0312 14:16:46.133437 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:46.133527 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:46.133527 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:46.133527 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:46.134133 master-0 kubenswrapper[7440]: I0312 14:16:46.133559 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:46.238796 master-0 kubenswrapper[7440]: I0312 14:16:46.238724 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerStarted","Data":"b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4"} Mar 12 14:16:46.239853 master-0 kubenswrapper[7440]: I0312 14:16:46.239814 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerStarted","Data":"73192d725f60950aaefb2920f89f9df4128d62e1eb94b2e0025225f509337195"} Mar 12 14:16:47.131439 master-0 kubenswrapper[7440]: I0312 14:16:47.131347 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:47.131439 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:47.131439 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:47.131439 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:47.131439 master-0 kubenswrapper[7440]: I0312 14:16:47.131436 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:47.968311 master-0 kubenswrapper[7440]: I0312 14:16:47.968196 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:16:47.968311 master-0 kubenswrapper[7440]: I0312 14:16:47.968267 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:16:48.135315 master-0 kubenswrapper[7440]: I0312 14:16:48.135263 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:48.135315 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:48.135315 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:48.135315 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:16:48.135981 master-0 kubenswrapper[7440]: I0312 14:16:48.135337 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:48.252660 master-0 kubenswrapper[7440]: I0312 14:16:48.252613 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerStarted","Data":"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"} Mar 12 14:16:48.253670 master-0 kubenswrapper[7440]: I0312 14:16:48.253645 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerStarted","Data":"e168f066de3b5e7cee61e5586918c799488b316ce826a0bf7d5dd489987b0eb1"} Mar 12 14:16:48.272924 master-0 kubenswrapper[7440]: I0312 14:16:48.271063 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" podStartSLOduration=1.360458476 podStartE2EDuration="3.271046873s" podCreationTimestamp="2026-03-12 14:16:45 +0000 UTC" firstStartedPulling="2026-03-12 14:16:45.734923705 +0000 UTC m=+266.070302264" lastFinishedPulling="2026-03-12 14:16:47.645512102 +0000 UTC m=+267.980890661" observedRunningTime="2026-03-12 14:16:48.271028663 +0000 UTC m=+268.606407242" watchObservedRunningTime="2026-03-12 14:16:48.271046873 +0000 UTC m=+268.606425432" Mar 12 14:16:49.132519 master-0 kubenswrapper[7440]: I0312 14:16:49.131008 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 12 14:16:49.132519 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:49.132519 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:49.132519 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:49.132519 master-0 kubenswrapper[7440]: I0312 14:16:49.131156 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:49.262060 master-0 kubenswrapper[7440]: I0312 14:16:49.262003 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerStarted","Data":"a010fc25af9de91f067075c308675d17b18bd607610859f12d4815abc91678e3"} Mar 12 14:16:50.131610 master-0 kubenswrapper[7440]: I0312 14:16:50.131544 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:50.131610 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:50.131610 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:50.131610 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:50.131942 master-0 kubenswrapper[7440]: I0312 14:16:50.131614 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:50.271025 master-0 kubenswrapper[7440]: I0312 14:16:50.270953 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerStarted","Data":"41b5c53c49fe52ce73b445331a08ad2c82edf1c84e6716a8772e0ee97bf8ec25"} Mar 12 14:16:50.298332 master-0 kubenswrapper[7440]: I0312 14:16:50.298251 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" podStartSLOduration=2.811674697 podStartE2EDuration="6.298232929s" podCreationTimestamp="2026-03-12 14:16:44 +0000 UTC" firstStartedPulling="2026-03-12 14:16:45.606413543 +0000 UTC m=+265.941792102" lastFinishedPulling="2026-03-12 14:16:49.092971775 +0000 UTC m=+269.428350334" observedRunningTime="2026-03-12 14:16:50.296435163 +0000 UTC m=+270.631813722" watchObservedRunningTime="2026-03-12 14:16:50.298232929 +0000 UTC m=+270.633611478" Mar 12 14:16:51.131085 master-0 kubenswrapper[7440]: I0312 14:16:51.131005 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:51.131085 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:51.131085 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:51.131085 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:51.131391 master-0 kubenswrapper[7440]: I0312 14:16:51.131081 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:52.132630 master-0 kubenswrapper[7440]: I0312 14:16:52.132573 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:52.132630 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:52.132630 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:52.132630 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:52.133240 master-0 kubenswrapper[7440]: I0312 14:16:52.132656 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:53.132171 master-0 kubenswrapper[7440]: I0312 14:16:53.132091 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:53.132171 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:53.132171 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:53.132171 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:53.132429 master-0 kubenswrapper[7440]: I0312 14:16:53.132185 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:54.061064 master-0 kubenswrapper[7440]: I0312 14:16:54.060965 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 12 14:16:54.061754 master-0 kubenswrapper[7440]: I0312 14:16:54.061206 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" 
containerID="cri-o://95ba11fc8a440b0f75fb1a6bf90aed334dc73dd1799f7af488f9efe94a5e77b1" gracePeriod=30 Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: I0312 14:16:54.063015 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: E0312 14:16:54.063264 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: I0312 14:16:54.063276 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: E0312 14:16:54.063293 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: I0312 14:16:54.063300 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.063526 master-0 kubenswrapper[7440]: I0312 14:16:54.063507 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.063881 master-0 kubenswrapper[7440]: I0312 14:16:54.063775 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 12 14:16:54.064736 master-0 kubenswrapper[7440]: I0312 14:16:54.064701 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.131645 master-0 kubenswrapper[7440]: I0312 14:16:54.131569 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:54.131645 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:54.131645 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:54.131645 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:54.131645 master-0 kubenswrapper[7440]: I0312 14:16:54.131627 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:54.134988 master-0 kubenswrapper[7440]: I0312 14:16:54.134929 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.135123 master-0 kubenswrapper[7440]: I0312 14:16:54.135032 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.216331 master-0 kubenswrapper[7440]: I0312 14:16:54.216294 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 14:16:54.223103 master-0 kubenswrapper[7440]: I0312 14:16:54.223025 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:16:54.239658 master-0 kubenswrapper[7440]: I0312 14:16:54.239591 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.239875 master-0 kubenswrapper[7440]: I0312 14:16:54.239701 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.239875 master-0 kubenswrapper[7440]: I0312 14:16:54.239804 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.239875 master-0 kubenswrapper[7440]: I0312 14:16:54.239861 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:16:54.248480 master-0 kubenswrapper[7440]: I0312 14:16:54.248414 7440 kubelet.go:2706] "Unable 
to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="b52b09e7-5438-4354-a6de-1760d23da161" Mar 12 14:16:54.299792 master-0 kubenswrapper[7440]: I0312 14:16:54.299740 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="95ba11fc8a440b0f75fb1a6bf90aed334dc73dd1799f7af488f9efe94a5e77b1" exitCode=0 Mar 12 14:16:54.299993 master-0 kubenswrapper[7440]: I0312 14:16:54.299822 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 14:16:54.300095 master-0 kubenswrapper[7440]: I0312 14:16:54.300075 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cddeeb3d78172cd6ac796885f0e90479fda94b207b0174c18397e7f3e17b7e9" Mar 12 14:16:54.300228 master-0 kubenswrapper[7440]: I0312 14:16:54.300166 7440 scope.go:117] "RemoveContainer" containerID="d81715b1a2dbc54afa6b4ebf0b0cbc31e29e0bdb6377beba9d7f0f245fb67694" Mar 12 14:16:54.301956 master-0 kubenswrapper[7440]: I0312 14:16:54.301935 7440 generic.go:334] "Generic (PLEG): container finished" podID="941c0808-bbfd-467e-b733-3a8294163ee5" containerID="b0d7763766a63cc91dd74368313cbb94587dedcd2efd8ded0e17187af3e40d25" exitCode=0 Mar 12 14:16:54.302067 master-0 kubenswrapper[7440]: I0312 14:16:54.302048 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerDied","Data":"b0d7763766a63cc91dd74368313cbb94587dedcd2efd8ded0e17187af3e40d25"} Mar 12 14:16:54.340661 master-0 kubenswrapper[7440]: I0312 14:16:54.340539 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 12 
14:16:54.340862 master-0 kubenswrapper[7440]: I0312 14:16:54.340683 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 12 14:16:54.341033 master-0 kubenswrapper[7440]: I0312 14:16:54.341010 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:54.341093 master-0 kubenswrapper[7440]: I0312 14:16:54.341043 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:54.443119 master-0 kubenswrapper[7440]: I0312 14:16:54.442537 7440 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:54.443119 master-0 kubenswrapper[7440]: I0312 14:16:54.442585 7440 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:54.518707 master-0 kubenswrapper[7440]: I0312 14:16:54.518650 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:16:54.535083 master-0 kubenswrapper[7440]: W0312 14:16:54.535029 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a711bc27e73e2efc239fb72d1184e6.slice/crio-c3a306c26a0173b7c306591d8fb09ccc137a9d2d80b43e56b18e1f7e938dbefa WatchSource:0}: Error finding container c3a306c26a0173b7c306591d8fb09ccc137a9d2d80b43e56b18e1f7e938dbefa: Status 404 returned error can't find the container with id c3a306c26a0173b7c306591d8fb09ccc137a9d2d80b43e56b18e1f7e938dbefa
Mar 12 14:16:55.132126 master-0 kubenswrapper[7440]: I0312 14:16:55.132047 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:16:55.132126 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:16:55.132126 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:16:55.132126 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:16:55.132633 master-0 kubenswrapper[7440]: I0312 14:16:55.132134 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:16:55.313200 master-0 kubenswrapper[7440]: I0312 14:16:55.313128 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="3dd7a4da04b6c01935c26571e75395e15a7850b95c867d09c0ff6a148fabca36" exitCode=0
Mar 12 14:16:55.313361 master-0 kubenswrapper[7440]: I0312 14:16:55.313255 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerDied","Data":"3dd7a4da04b6c01935c26571e75395e15a7850b95c867d09c0ff6a148fabca36"}
Mar 12 14:16:55.313361 master-0 kubenswrapper[7440]: I0312 14:16:55.313316 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"c3a306c26a0173b7c306591d8fb09ccc137a9d2d80b43e56b18e1f7e938dbefa"}
Mar 12 14:16:55.571479 master-0 kubenswrapper[7440]: I0312 14:16:55.571432 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:55.660646 master-0 kubenswrapper[7440]: I0312 14:16:55.660479 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir\") pod \"941c0808-bbfd-467e-b733-3a8294163ee5\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") "
Mar 12 14:16:55.660646 master-0 kubenswrapper[7440]: I0312 14:16:55.660613 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access\") pod \"941c0808-bbfd-467e-b733-3a8294163ee5\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") "
Mar 12 14:16:55.660940 master-0 kubenswrapper[7440]: I0312 14:16:55.660662 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock\") pod \"941c0808-bbfd-467e-b733-3a8294163ee5\" (UID: \"941c0808-bbfd-467e-b733-3a8294163ee5\") "
Mar 12 14:16:55.661226 master-0 kubenswrapper[7440]: I0312 14:16:55.661184 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock" (OuterVolumeSpecName: "var-lock") pod "941c0808-bbfd-467e-b733-3a8294163ee5" (UID: "941c0808-bbfd-467e-b733-3a8294163ee5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:55.661488 master-0 kubenswrapper[7440]: I0312 14:16:55.661454 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "941c0808-bbfd-467e-b733-3a8294163ee5" (UID: "941c0808-bbfd-467e-b733-3a8294163ee5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:55.664231 master-0 kubenswrapper[7440]: I0312 14:16:55.664170 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "941c0808-bbfd-467e-b733-3a8294163ee5" (UID: "941c0808-bbfd-467e-b733-3a8294163ee5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:16:55.762881 master-0 kubenswrapper[7440]: I0312 14:16:55.762818 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941c0808-bbfd-467e-b733-3a8294163ee5-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:55.762881 master-0 kubenswrapper[7440]: I0312 14:16:55.762863 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:55.762881 master-0 kubenswrapper[7440]: I0312 14:16:55.762873 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/941c0808-bbfd-467e-b733-3a8294163ee5-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:55.812789 master-0 kubenswrapper[7440]: I0312 14:16:55.812748 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes"
Mar 12 14:16:55.813066 master-0 kubenswrapper[7440]: I0312 14:16:55.813036 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 12 14:16:55.830358 master-0 kubenswrapper[7440]: I0312 14:16:55.830300 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 14:16:55.830358 master-0 kubenswrapper[7440]: I0312 14:16:55.830345 7440 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="b52b09e7-5438-4354-a6de-1760d23da161"
Mar 12 14:16:55.834724 master-0 kubenswrapper[7440]: I0312 14:16:55.834470 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 14:16:55.834781 master-0 kubenswrapper[7440]: I0312 14:16:55.834729 7440 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="b52b09e7-5438-4354-a6de-1760d23da161"
Mar 12 14:16:56.047995 master-0 kubenswrapper[7440]: I0312 14:16:56.047951 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 12 14:16:56.048206 master-0 kubenswrapper[7440]: I0312 14:16:56.048185 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://24ee3eeca5a94629f5c47b0ce9433577ce076c824acff7a3bc086c327eefa56a" gracePeriod=30
Mar 12 14:16:56.048342 master-0 kubenswrapper[7440]: I0312 14:16:56.048276 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://80ae1c45663433034e72c5c20f8723a435fbf83c810f99ce19145980cd404753" gracePeriod=30
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: I0312 14:16:56.050218 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: E0312 14:16:56.050465 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: I0312 14:16:56.050481 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: E0312 14:16:56.050499 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: I0312 14:16:56.050508 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050510 master-0 kubenswrapper[7440]: E0312 14:16:56.050517 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050523 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: E0312 14:16:56.050544 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050550 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: E0312 14:16:56.050572 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050578 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050703 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050714 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050724 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050736 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 12 14:16:56.050839 master-0 kubenswrapper[7440]: I0312 14:16:56.050746 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.051304 master-0 kubenswrapper[7440]: E0312 14:16:56.050862 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.051304 master-0 kubenswrapper[7440]: I0312 14:16:56.050872 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.051304 master-0 kubenswrapper[7440]: I0312 14:16:56.051016 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 12 14:16:56.051911 master-0 kubenswrapper[7440]: I0312 14:16:56.051862 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.131421 master-0 kubenswrapper[7440]: I0312 14:16:56.131372 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:16:56.131421 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:16:56.131421 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:16:56.131421 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:16:56.131693 master-0 kubenswrapper[7440]: I0312 14:16:56.131654 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:16:56.169669 master-0 kubenswrapper[7440]: I0312 14:16:56.169635 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.170133 master-0 kubenswrapper[7440]: I0312 14:16:56.169769 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.217808 master-0 kubenswrapper[7440]: I0312 14:16:56.217686 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:56.220317 master-0 kubenswrapper[7440]: I0312 14:16:56.219969 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 14:16:56.247231 master-0 kubenswrapper[7440]: I0312 14:16:56.247028 7440 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="d69c2287-2f67-434b-8615-4b40122dfab6"
Mar 12 14:16:56.271093 master-0 kubenswrapper[7440]: I0312 14:16:56.270847 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.271093 master-0 kubenswrapper[7440]: I0312 14:16:56.271023 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.271203 master-0 kubenswrapper[7440]: I0312 14:16:56.271149 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.271295 master-0 kubenswrapper[7440]: I0312 14:16:56.271224 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.326259 master-0 kubenswrapper[7440]: I0312 14:16:56.326133 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerDied","Data":"b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088"}
Mar 12 14:16:56.326259 master-0 kubenswrapper[7440]: I0312 14:16:56.326171 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088"
Mar 12 14:16:56.326259 master-0 kubenswrapper[7440]: I0312 14:16:56.326165 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Mar 12 14:16:56.328771 master-0 kubenswrapper[7440]: I0312 14:16:56.328731 7440 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="80ae1c45663433034e72c5c20f8723a435fbf83c810f99ce19145980cd404753" exitCode=0
Mar 12 14:16:56.328771 master-0 kubenswrapper[7440]: I0312 14:16:56.328764 7440 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="24ee3eeca5a94629f5c47b0ce9433577ce076c824acff7a3bc086c327eefa56a" exitCode=0
Mar 12 14:16:56.328867 master-0 kubenswrapper[7440]: I0312 14:16:56.328810 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1fca57791a870ac4ac75e7237e7b4e82aa4de3284ea9553565786a397ec7628"
Mar 12 14:16:56.328867 master-0 kubenswrapper[7440]: I0312 14:16:56.328825 7440 scope.go:117] "RemoveContainer" containerID="c4d90f1c1d446b898ed50108e2482967a437ec5d999259ff9e991131aa20b54a"
Mar 12 14:16:56.328964 master-0 kubenswrapper[7440]: I0312 14:16:56.328946 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:16:56.331646 master-0 kubenswrapper[7440]: I0312 14:16:56.331594 7440 generic.go:334] "Generic (PLEG): container finished" podID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerID="c501e9b39beb072c6b4373a31e843bee99560319d607f9fde7f18203290ac2ca" exitCode=0
Mar 12 14:16:56.331732 master-0 kubenswrapper[7440]: I0312 14:16:56.331703 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerDied","Data":"c501e9b39beb072c6b4373a31e843bee99560319d607f9fde7f18203290ac2ca"}
Mar 12 14:16:56.335200 master-0 kubenswrapper[7440]: I0312 14:16:56.335149 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"338028102e5041c5f5cf79657b9c14128ab7afda445b15271f5d150bacb3bcde"}
Mar 12 14:16:56.335200 master-0 kubenswrapper[7440]: I0312 14:16:56.335199 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"2aee18625338d290a376474bbeead6c6bef3630d9c0a26ff9cffcf446662e724"}
Mar 12 14:16:56.335343 master-0 kubenswrapper[7440]: I0312 14:16:56.335215 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db"}
Mar 12 14:16:56.335456 master-0 kubenswrapper[7440]: I0312 14:16:56.335440 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372498 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372553 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372616 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372658 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372735 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372752 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372789 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372810 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372826 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.372873 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.373060 7440 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.373079 7440 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.373090 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.373101 7440 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:56.373402 master-0 kubenswrapper[7440]: I0312 14:16:56.373111 7440 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\""
Mar 12 14:16:56.384709 master-0 kubenswrapper[7440]: I0312 14:16:56.384636 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.38461861 podStartE2EDuration="2.38461861s" podCreationTimestamp="2026-03-12 14:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:16:56.38344795 +0000 UTC m=+276.718826509" watchObservedRunningTime="2026-03-12 14:16:56.38461861 +0000 UTC m=+276.719997169"
Mar 12 14:16:56.516527 master-0 kubenswrapper[7440]: I0312 14:16:56.516466 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:16:56.535027 master-0 kubenswrapper[7440]: W0312 14:16:56.534979 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fed292c3d5a90a99bfee43e89190405.slice/crio-06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207 WatchSource:0}: Error finding container 06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207: Status 404 returned error can't find the container with id 06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207
Mar 12 14:16:56.945127 master-0 kubenswrapper[7440]: I0312 14:16:56.945059 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 12 14:16:56.945474 master-0 kubenswrapper[7440]: I0312 14:16:56.945441 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://e16a62b4a09dc1bf1229b7f6c1c70a440164d0b0527802cf7ca0f10f946c47d1" gracePeriod=30
Mar 12 14:16:56.945555 master-0 kubenswrapper[7440]: I0312 14:16:56.945497 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://f877e9e772e626aee6aab05c7ac905f2c4beb3f6e88c57c25b9eaeab3e18035d" gracePeriod=30
Mar 12 14:16:56.945617 master-0 kubenswrapper[7440]: I0312 14:16:56.945559 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://19b7b4eaee1a852a9ccf6d4df36d726273012941b1ee088eb660f41b5b7c26b8" gracePeriod=30
Mar 12 14:16:56.945617 master-0 kubenswrapper[7440]: I0312 14:16:56.945595 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://8c824a81227bbc4977bfae432c464a86a92fba843d33ea60db40b0306a18e201" gracePeriod=30
Mar 12 14:16:56.945734 master-0 kubenswrapper[7440]: I0312 14:16:56.945630 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://ce4bbd63e68811b084a013b96af26d98956aa6df6255b0040e0ffbc96b8a34b0" gracePeriod=30
Mar 12 14:16:56.947257 master-0 kubenswrapper[7440]: I0312 14:16:56.947218 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 12 14:16:56.947484 master-0 kubenswrapper[7440]: E0312 14:16:56.947464 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd"
Mar 12 14:16:56.947484 master-0 kubenswrapper[7440]: I0312 14:16:56.947478 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947492 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947499 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947508 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947514 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947528 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947533 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947542 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947548 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947557 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947562 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947572 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947578 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: E0312 14:16:56.947586 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz"
Mar 12 14:16:56.947609 master-0 kubenswrapper[7440]: I0312 14:16:56.947592 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz"
Mar 12 14:16:56.948962 master-0 kubenswrapper[7440]: I0312 14:16:56.947692 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl"
Mar 12 14:16:56.948962 master-0 kubenswrapper[7440]: I0312 14:16:56.947710 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev"
Mar 12 14:16:56.948962 master-0 kubenswrapper[7440]: I0312 14:16:56.947721 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics"
Mar 12 14:16:56.948962 master-0 kubenswrapper[7440]: I0312 14:16:56.947731 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd"
Mar 12 14:16:56.948962 master-0 kubenswrapper[7440]: I0312 14:16:56.947739 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz"
Mar 12 14:16:57.081478 master-0 kubenswrapper[7440]: I0312 14:16:57.081436 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.081554 master-0 kubenswrapper[7440]: I0312 14:16:57.081483 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.081554 master-0 kubenswrapper[7440]: I0312 14:16:57.081511 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.081641 master-0 kubenswrapper[7440]: I0312 14:16:57.081578 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.081685 master-0 kubenswrapper[7440]: I0312 14:16:57.081639 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.081685 master-0 kubenswrapper[7440]: I0312 14:16:57.081671 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.131393 master-0 kubenswrapper[7440]: I0312 14:16:57.131302 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:16:57.131393 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:16:57.131393 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:16:57.131393 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:16:57.131393 master-0 kubenswrapper[7440]: I0312 14:16:57.131370 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:16:57.182697 master-0 kubenswrapper[7440]: I0312 14:16:57.182588 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182745 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182814 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182859 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182884 7440 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182944 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.182949 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.183039 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.183079 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.183093 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 
14:16:57.183190 master-0 kubenswrapper[7440]: I0312 14:16:57.183115 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.183547 master-0 kubenswrapper[7440]: I0312 14:16:57.183268 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:16:57.344232 master-0 kubenswrapper[7440]: I0312 14:16:57.344135 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"897e913e5a5888d39eecca73ba6606dae5753683c29db8129ecaf95abc7f3cbb"} Mar 12 14:16:57.344232 master-0 kubenswrapper[7440]: I0312 14:16:57.344223 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0"} Mar 12 14:16:57.344528 master-0 kubenswrapper[7440]: I0312 14:16:57.344255 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"cf5f8f103a771fcea458b305dc771a6ec643f8d62a671cc46fbc879cf21a71e2"} Mar 12 14:16:57.344528 master-0 kubenswrapper[7440]: I0312 14:16:57.344266 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207"} Mar 12 14:16:57.347791 master-0 kubenswrapper[7440]: I0312 14:16:57.347750 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 14:16:57.348799 master-0 kubenswrapper[7440]: I0312 14:16:57.348747 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 14:16:57.350500 master-0 kubenswrapper[7440]: I0312 14:16:57.350458 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="ce4bbd63e68811b084a013b96af26d98956aa6df6255b0040e0ffbc96b8a34b0" exitCode=2 Mar 12 14:16:57.350500 master-0 kubenswrapper[7440]: I0312 14:16:57.350486 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="f877e9e772e626aee6aab05c7ac905f2c4beb3f6e88c57c25b9eaeab3e18035d" exitCode=0 Mar 12 14:16:57.350500 master-0 kubenswrapper[7440]: I0312 14:16:57.350498 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="19b7b4eaee1a852a9ccf6d4df36d726273012941b1ee088eb660f41b5b7c26b8" exitCode=2 Mar 12 14:16:57.631854 master-0 kubenswrapper[7440]: I0312 14:16:57.631805 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 12 14:16:57.688795 master-0 kubenswrapper[7440]: I0312 14:16:57.688653 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir\") pod \"0c8675d4-a0be-42a3-96af-e56f5fb02983\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " Mar 12 14:16:57.688795 master-0 kubenswrapper[7440]: I0312 14:16:57.688741 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock\") pod \"0c8675d4-a0be-42a3-96af-e56f5fb02983\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " Mar 12 14:16:57.688795 master-0 kubenswrapper[7440]: I0312 14:16:57.688768 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c8675d4-a0be-42a3-96af-e56f5fb02983" (UID: "0c8675d4-a0be-42a3-96af-e56f5fb02983"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:16:57.689130 master-0 kubenswrapper[7440]: I0312 14:16:57.688792 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access\") pod \"0c8675d4-a0be-42a3-96af-e56f5fb02983\" (UID: \"0c8675d4-a0be-42a3-96af-e56f5fb02983\") " Mar 12 14:16:57.689320 master-0 kubenswrapper[7440]: I0312 14:16:57.689282 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock" (OuterVolumeSpecName: "var-lock") pod "0c8675d4-a0be-42a3-96af-e56f5fb02983" (UID: "0c8675d4-a0be-42a3-96af-e56f5fb02983"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:16:57.689524 master-0 kubenswrapper[7440]: I0312 14:16:57.689493 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:57.689524 master-0 kubenswrapper[7440]: I0312 14:16:57.689520 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c8675d4-a0be-42a3-96af-e56f5fb02983-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:57.692837 master-0 kubenswrapper[7440]: I0312 14:16:57.692796 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c8675d4-a0be-42a3-96af-e56f5fb02983" (UID: "0c8675d4-a0be-42a3-96af-e56f5fb02983"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:57.791337 master-0 kubenswrapper[7440]: I0312 14:16:57.791264 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8675d4-a0be-42a3-96af-e56f5fb02983-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:16:57.815466 master-0 kubenswrapper[7440]: I0312 14:16:57.815427 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 12 14:16:57.816216 master-0 kubenswrapper[7440]: I0312 14:16:57.816195 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:16:58.131329 master-0 kubenswrapper[7440]: I0312 14:16:58.131253 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:58.131329 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:58.131329 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:58.131329 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:58.131666 master-0 kubenswrapper[7440]: I0312 14:16:58.131349 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:16:58.361364 master-0 kubenswrapper[7440]: I0312 14:16:58.361317 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 12 14:16:59.131779 master-0 kubenswrapper[7440]: I0312 14:16:59.131711 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:16:59.131779 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:16:59.131779 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:16:59.131779 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:16:59.132053 master-0 kubenswrapper[7440]: I0312 14:16:59.131794 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:00.132104 master-0 kubenswrapper[7440]: I0312 14:17:00.132046 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:00.132104 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:00.132104 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:00.132104 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:00.132784 master-0 kubenswrapper[7440]: I0312 14:17:00.132106 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:01.131031 master-0 kubenswrapper[7440]: I0312 14:17:01.130978 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:01.131031 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:01.131031 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:01.131031 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:01.131375 master-0 kubenswrapper[7440]: I0312 14:17:01.131039 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:02.132195 master-0 kubenswrapper[7440]: I0312 14:17:02.132108 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:02.132195 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:02.132195 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:02.132195 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:02.132195 master-0 kubenswrapper[7440]: I0312 14:17:02.132180 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:03.131852 master-0 kubenswrapper[7440]: I0312 14:17:03.131742 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:17:03.131852 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:03.131852 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:03.131852 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:03.131852 master-0 kubenswrapper[7440]: I0312 14:17:03.131819 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:04.133000 master-0 kubenswrapper[7440]: I0312 14:17:04.132885 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:04.133000 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:04.133000 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:04.133000 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:04.134449 master-0 kubenswrapper[7440]: I0312 14:17:04.133021 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:05.132951 master-0 kubenswrapper[7440]: I0312 14:17:05.132650 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:05.132951 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:05.132951 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:05.132951 master-0 kubenswrapper[7440]: healthz 
check failed Mar 12 14:17:05.132951 master-0 kubenswrapper[7440]: I0312 14:17:05.132784 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:06.132812 master-0 kubenswrapper[7440]: I0312 14:17:06.132736 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:06.132812 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:06.132812 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:06.132812 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:06.133175 master-0 kubenswrapper[7440]: I0312 14:17:06.132870 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:07.132231 master-0 kubenswrapper[7440]: I0312 14:17:07.132155 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:07.132231 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:07.132231 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:07.132231 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:07.132231 master-0 kubenswrapper[7440]: I0312 14:17:07.132227 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:08.131641 master-0 kubenswrapper[7440]: I0312 14:17:08.131525 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:08.131641 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:08.131641 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:08.131641 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:08.131641 master-0 kubenswrapper[7440]: I0312 14:17:08.131632 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:09.131472 master-0 kubenswrapper[7440]: I0312 14:17:09.131378 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:09.131472 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:09.131472 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:09.131472 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:09.131980 master-0 kubenswrapper[7440]: I0312 14:17:09.131471 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:09.517141 master-0 kubenswrapper[7440]: I0312 14:17:09.517053 7440 
patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:17:09.517141 master-0 kubenswrapper[7440]: I0312 14:17:09.517139 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:10.132368 master-0 kubenswrapper[7440]: I0312 14:17:10.132291 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:10.132368 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:10.132368 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:10.132368 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:10.132891 master-0 kubenswrapper[7440]: I0312 14:17:10.132382 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:11.130420 master-0 kubenswrapper[7440]: I0312 14:17:11.130376 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:11.130420 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:11.130420 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:11.130420 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:11.130420 master-0 kubenswrapper[7440]: I0312 14:17:11.130429 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:11.455010 master-0 kubenswrapper[7440]: I0312 14:17:11.454866 7440 generic.go:334] "Generic (PLEG): container finished" podID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerID="6332902d5d84cf465484ab14dac64d9b60905fd555e191dc35b3857c84ea5469" exitCode=0 Mar 12 14:17:11.739887 master-0 kubenswrapper[7440]: E0312 14:17:11.739815 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:17:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:17:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:17:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:17:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" 
for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Mar 12 14:17:12.131255 master-0 kubenswrapper[7440]: I0312 14:17:12.131142 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:12.131255 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:12.131255 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:12.131255 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:12.131255 master-0 kubenswrapper[7440]: I0312 14:17:12.131210 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:13.131116 master-0 kubenswrapper[7440]: I0312 14:17:13.131064 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:13.131116 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:13.131116 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:13.131116 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:13.131830 master-0 kubenswrapper[7440]: I0312 14:17:13.131135 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:14.131074 master-0 kubenswrapper[7440]: I0312 
14:17:14.131018 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:14.131074 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:14.131074 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:14.131074 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:14.131390 master-0 kubenswrapper[7440]: I0312 14:17:14.131076 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:15.131179 master-0 kubenswrapper[7440]: I0312 14:17:15.131115 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:15.131179 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:15.131179 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:15.131179 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:15.131778 master-0 kubenswrapper[7440]: I0312 14:17:15.131203 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:16.132121 master-0 kubenswrapper[7440]: I0312 14:17:16.132057 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:16.132121 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:16.132121 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:16.132121 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:16.132633 master-0 kubenswrapper[7440]: I0312 14:17:16.132150 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:16.381853 master-0 kubenswrapper[7440]: E0312 14:17:16.381679 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:17.132239 master-0 kubenswrapper[7440]: I0312 14:17:17.132161 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:17.132239 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:17.132239 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:17.132239 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:17.133436 master-0 kubenswrapper[7440]: I0312 14:17:17.132244 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:17.967886 master-0 kubenswrapper[7440]: I0312 14:17:17.967846 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:17:17.968184 master-0 kubenswrapper[7440]: I0312 14:17:17.968152 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:17:18.132095 master-0 kubenswrapper[7440]: I0312 14:17:18.132031 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:18.132095 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:18.132095 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:18.132095 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:18.132095 master-0 kubenswrapper[7440]: I0312 14:17:18.132094 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:19.131564 master-0 kubenswrapper[7440]: I0312 14:17:19.131486 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:19.131564 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:19.131564 master-0 kubenswrapper[7440]: 
[+]process-running ok Mar 12 14:17:19.131564 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:19.131990 master-0 kubenswrapper[7440]: I0312 14:17:19.131583 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:19.517200 master-0 kubenswrapper[7440]: I0312 14:17:19.517130 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:17:19.517200 master-0 kubenswrapper[7440]: I0312 14:17:19.517191 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:20.131536 master-0 kubenswrapper[7440]: I0312 14:17:20.131458 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:20.131536 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:20.131536 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:20.131536 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:20.131860 master-0 kubenswrapper[7440]: I0312 14:17:20.131559 7440 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:21.131073 master-0 kubenswrapper[7440]: I0312 14:17:21.131011 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:21.131073 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:21.131073 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:21.131073 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:21.131771 master-0 kubenswrapper[7440]: I0312 14:17:21.131087 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:21.740746 master-0 kubenswrapper[7440]: E0312 14:17:21.740666 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:22.132454 master-0 kubenswrapper[7440]: I0312 14:17:22.132301 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:22.132454 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:22.132454 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:22.132454 master-0 kubenswrapper[7440]: 
healthz check failed Mar 12 14:17:22.132454 master-0 kubenswrapper[7440]: I0312 14:17:22.132374 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:23.131952 master-0 kubenswrapper[7440]: I0312 14:17:23.131853 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:23.131952 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:23.131952 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:23.131952 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:23.132350 master-0 kubenswrapper[7440]: I0312 14:17:23.132004 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:24.131145 master-0 kubenswrapper[7440]: I0312 14:17:24.131085 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:24.131145 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:24.131145 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:24.131145 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:24.131656 master-0 kubenswrapper[7440]: I0312 14:17:24.131164 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:25.132835 master-0 kubenswrapper[7440]: I0312 14:17:25.132714 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:25.132835 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:25.132835 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:25.132835 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:25.132835 master-0 kubenswrapper[7440]: I0312 14:17:25.132804 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:26.130997 master-0 kubenswrapper[7440]: I0312 14:17:26.130931 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:26.130997 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:26.130997 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:26.130997 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:26.131279 master-0 kubenswrapper[7440]: I0312 14:17:26.131007 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:26.383099 master-0 kubenswrapper[7440]: E0312 14:17:26.382963 7440 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:27.132290 master-0 kubenswrapper[7440]: I0312 14:17:27.132224 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:27.132290 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:27.132290 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:27.132290 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:27.132616 master-0 kubenswrapper[7440]: I0312 14:17:27.132305 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:27.375160 master-0 kubenswrapper[7440]: I0312 14:17:27.375101 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:52232->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 12 14:17:27.375366 master-0 kubenswrapper[7440]: I0312 14:17:27.375167 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:52232->127.0.0.1:10357: read: connection reset by peer" Mar 12 
14:17:27.537820 master-0 kubenswrapper[7440]: I0312 14:17:27.537741 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 14:17:27.538833 master-0 kubenswrapper[7440]: I0312 14:17:27.538794 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 14:17:27.539508 master-0 kubenswrapper[7440]: I0312 14:17:27.539477 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 14:17:27.539922 master-0 kubenswrapper[7440]: I0312 14:17:27.539891 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 14:17:27.540945 master-0 kubenswrapper[7440]: I0312 14:17:27.540916 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 14:17:27.563891 master-0 kubenswrapper[7440]: I0312 14:17:27.563818 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 14:17:27.564650 master-0 kubenswrapper[7440]: I0312 14:17:27.564610 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 14:17:27.565287 master-0 kubenswrapper[7440]: I0312 14:17:27.565247 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 14:17:27.565627 master-0 kubenswrapper[7440]: I0312 14:17:27.565594 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 14:17:27.566590 master-0 kubenswrapper[7440]: I0312 14:17:27.566544 7440 
generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="8c824a81227bbc4977bfae432c464a86a92fba843d33ea60db40b0306a18e201" exitCode=137 Mar 12 14:17:27.566590 master-0 kubenswrapper[7440]: I0312 14:17:27.566578 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="e16a62b4a09dc1bf1229b7f6c1c70a440164d0b0527802cf7ca0f10f946c47d1" exitCode=137 Mar 12 14:17:27.566726 master-0 kubenswrapper[7440]: I0312 14:17:27.566691 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 14:17:27.568411 master-0 kubenswrapper[7440]: I0312 14:17:27.568369 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:17:27.568656 master-0 kubenswrapper[7440]: I0312 14:17:27.568618 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0" exitCode=255 Mar 12 14:17:27.648737 master-0 kubenswrapper[7440]: I0312 14:17:27.648547 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.648737 master-0 kubenswrapper[7440]: I0312 14:17:27.648645 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.648737 master-0 kubenswrapper[7440]: I0312 14:17:27.648715 7440 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648765 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648795 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648817 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648884 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648950 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648989 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.648997 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649037 master-0 kubenswrapper[7440]: I0312 14:17:27.649020 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649040 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649148 7440 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649169 7440 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649184 7440 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649202 7440 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649257 7440 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:27.649563 master-0 kubenswrapper[7440]: I0312 14:17:27.649270 7440 reconciler_common.go:293] "Volume detached for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:17:28.131978 master-0 kubenswrapper[7440]: I0312 14:17:28.131875 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:28.131978 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:28.131978 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:28.131978 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:28.131978 master-0 kubenswrapper[7440]: I0312 14:17:28.131973 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:29.132250 master-0 kubenswrapper[7440]: I0312 14:17:29.132120 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:29.132250 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:29.132250 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:29.132250 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:29.132250 master-0 kubenswrapper[7440]: I0312 14:17:29.132238 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:30.138224 master-0 kubenswrapper[7440]: I0312 14:17:30.138161 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:30.138224 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:30.138224 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:30.138224 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:30.138988 master-0 kubenswrapper[7440]: I0312 14:17:30.138961 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:31.054452 master-0 kubenswrapper[7440]: E0312 14:17:31.054309 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db520b04e55 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.051844181 +0000 UTC m=+277.387222740,LastTimestamp:2026-03-12 14:16:57.051844181 +0000 UTC m=+277.387222740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:17:31.131979 master-0 kubenswrapper[7440]: I0312 14:17:31.131851 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:31.131979 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:31.131979 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:31.131979 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:31.132355 master-0 kubenswrapper[7440]: I0312 14:17:31.131999 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:31.741936 master-0 kubenswrapper[7440]: E0312 14:17:31.741827 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:31.820205 master-0 kubenswrapper[7440]: E0312 14:17:31.820043 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:17:31.820728 master-0 kubenswrapper[7440]: E0312 14:17:31.820684 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 12 14:17:31.820781 master-0 kubenswrapper[7440]: I0312 14:17:31.820739 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"d88b47b724ff96f583f2f5d18384ac675317e999c797b06ce407d3a96a3c0fcd"} Mar 12 14:17:31.821055 master-0 
kubenswrapper[7440]: I0312 14:17:31.821015 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:17:31.821098 master-0 kubenswrapper[7440]: I0312 14:17:31.821064 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerDied","Data":"3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce"} Mar 12 14:17:31.821137 master-0 kubenswrapper[7440]: I0312 14:17:31.821093 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce" Mar 12 14:17:31.821137 master-0 kubenswrapper[7440]: I0312 14:17:31.821122 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:17:31.824176 master-0 kubenswrapper[7440]: I0312 14:17:31.824126 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 14:17:31.827066 master-0 kubenswrapper[7440]: I0312 14:17:31.827013 7440 scope.go:117] "RemoveContainer" containerID="bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0" Mar 12 14:17:31.831735 master-0 kubenswrapper[7440]: I0312 14:17:31.831224 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a"} pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:17:31.831735 master-0 kubenswrapper[7440]: I0312 14:17:31.831344 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" containerID="cri-o://8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a" gracePeriod=600 Mar 12 14:17:31.839115 master-0 kubenswrapper[7440]: I0312 14:17:31.839050 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 12 14:17:31.840772 master-0 kubenswrapper[7440]: I0312 14:17:31.840717 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:17:31.925842 master-0 kubenswrapper[7440]: I0312 14:17:31.925760 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:17:31.925842 master-0 kubenswrapper[7440]: I0312 14:17:31.925833 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:17:32.132200 master-0 
kubenswrapper[7440]: I0312 14:17:32.132106 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:32.132200 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:32.132200 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:32.132200 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:32.132200 master-0 kubenswrapper[7440]: I0312 14:17:32.132183 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:32.607927 master-0 kubenswrapper[7440]: I0312 14:17:32.607764 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a" exitCode=0 Mar 12 14:17:32.611144 master-0 kubenswrapper[7440]: I0312 14:17:32.611090 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:17:33.131682 master-0 kubenswrapper[7440]: I0312 14:17:33.131602 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:33.131682 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:33.131682 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:33.131682 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:33.131682 master-0 
kubenswrapper[7440]: I0312 14:17:33.131675 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:34.131794 master-0 kubenswrapper[7440]: I0312 14:17:34.131707 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:34.131794 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:34.131794 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:34.131794 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:34.131794 master-0 kubenswrapper[7440]: I0312 14:17:34.131779 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:34.199029 master-0 kubenswrapper[7440]: I0312 14:17:34.198963 7440 scope.go:117] "RemoveContainer" containerID="e918e5e1279bbcaf698142b1c788174be79639920e9232ace941582c175becab" Mar 12 14:17:34.213987 master-0 kubenswrapper[7440]: I0312 14:17:34.213947 7440 scope.go:117] "RemoveContainer" containerID="24ee3eeca5a94629f5c47b0ce9433577ce076c824acff7a3bc086c327eefa56a" Mar 12 14:17:34.625764 master-0 kubenswrapper[7440]: I0312 14:17:34.625697 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_05fc4965-b390-4edc-a407-d431b06d7612/installer/0.log" Mar 12 14:17:34.625764 master-0 kubenswrapper[7440]: I0312 14:17:34.625759 7440 generic.go:334] "Generic (PLEG): container finished" podID="05fc4965-b390-4edc-a407-d431b06d7612" 
containerID="6aa44e483ff3af56ade2c830f5190301f0a2aff21489693f95cab78436b2ad8d" exitCode=1 Mar 12 14:17:35.131927 master-0 kubenswrapper[7440]: I0312 14:17:35.131774 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:35.131927 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:35.131927 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:35.131927 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:35.131927 master-0 kubenswrapper[7440]: I0312 14:17:35.131863 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:36.131658 master-0 kubenswrapper[7440]: I0312 14:17:36.131570 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:36.131658 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:36.131658 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:36.131658 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:36.131658 master-0 kubenswrapper[7440]: I0312 14:17:36.131628 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:36.383496 master-0 kubenswrapper[7440]: E0312 14:17:36.383305 7440 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:37.132189 master-0 kubenswrapper[7440]: I0312 14:17:37.132094 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:37.132189 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:37.132189 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:37.132189 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:37.132859 master-0 kubenswrapper[7440]: I0312 14:17:37.132209 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:38.131498 master-0 kubenswrapper[7440]: I0312 14:17:38.131415 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:38.131498 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:38.131498 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:38.131498 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:38.132041 master-0 kubenswrapper[7440]: I0312 14:17:38.131498 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 
14:17:39.132399 master-0 kubenswrapper[7440]: I0312 14:17:39.132310 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:39.132399 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:39.132399 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:39.132399 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:39.133470 master-0 kubenswrapper[7440]: I0312 14:17:39.132412 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:40.132722 master-0 kubenswrapper[7440]: I0312 14:17:40.132586 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:40.132722 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:40.132722 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:40.132722 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:40.132722 master-0 kubenswrapper[7440]: I0312 14:17:40.132680 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:41.131762 master-0 kubenswrapper[7440]: I0312 14:17:41.131680 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:41.131762 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:41.131762 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:41.131762 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:41.132049 master-0 kubenswrapper[7440]: I0312 14:17:41.131778 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:41.742255 master-0 kubenswrapper[7440]: E0312 14:17:41.742189 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 12 14:17:42.132442 master-0 kubenswrapper[7440]: I0312 14:17:42.132305 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:42.132442 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:42.132442 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:42.132442 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:42.132442 master-0 kubenswrapper[7440]: I0312 14:17:42.132376 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:43.133024 master-0 kubenswrapper[7440]: I0312 14:17:43.132912 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:43.133024 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:43.133024 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:43.133024 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:43.133024 master-0 kubenswrapper[7440]: I0312 14:17:43.132988 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:44.131563 master-0 kubenswrapper[7440]: I0312 14:17:44.131507 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:44.131563 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:44.131563 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:44.131563 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:44.131833 master-0 kubenswrapper[7440]: I0312 14:17:44.131570 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:45.131574 master-0 kubenswrapper[7440]: I0312 14:17:45.131478 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:17:45.131574 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:45.131574 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:45.131574 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:45.131574 master-0 kubenswrapper[7440]: I0312 14:17:45.131541 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:46.132311 master-0 kubenswrapper[7440]: I0312 14:17:46.132253 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:46.132311 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:46.132311 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:46.132311 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:46.132311 master-0 kubenswrapper[7440]: I0312 14:17:46.132313 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:46.384921 master-0 kubenswrapper[7440]: E0312 14:17:46.384703 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:47.132449 master-0 kubenswrapper[7440]: I0312 14:17:47.132379 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:47.132449 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:47.132449 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:47.132449 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:47.133024 master-0 kubenswrapper[7440]: I0312 14:17:47.132469 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:48.132770 master-0 kubenswrapper[7440]: I0312 14:17:48.132690 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:48.132770 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:48.132770 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:48.132770 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:48.133447 master-0 kubenswrapper[7440]: I0312 14:17:48.132771 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:49.131610 master-0 kubenswrapper[7440]: I0312 14:17:49.131509 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:49.131610 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:17:49.131610 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:49.131610 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:49.131610 master-0 kubenswrapper[7440]: I0312 14:17:49.131574 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:50.131697 master-0 kubenswrapper[7440]: I0312 14:17:50.131563 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:50.131697 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:50.131697 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:50.131697 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:50.131697 master-0 kubenswrapper[7440]: I0312 14:17:50.131675 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:51.132001 master-0 kubenswrapper[7440]: I0312 14:17:51.131912 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:51.132001 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:51.132001 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:51.132001 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:51.132001 master-0 kubenswrapper[7440]: I0312 14:17:51.131999 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:51.744246 master-0 kubenswrapper[7440]: E0312 14:17:51.744158 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 12 14:17:51.744246 master-0 kubenswrapper[7440]: E0312 14:17:51.744201 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:17:52.131604 master-0 kubenswrapper[7440]: I0312 14:17:52.131408 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:52.131604 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:52.131604 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:52.131604 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:52.131604 master-0 kubenswrapper[7440]: I0312 14:17:52.131500 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:53.131317 master-0 kubenswrapper[7440]: I0312 14:17:53.131243 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:53.131317 master-0 
kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:53.131317 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:53.131317 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:53.131934 master-0 kubenswrapper[7440]: I0312 14:17:53.131329 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:54.131101 master-0 kubenswrapper[7440]: I0312 14:17:54.131011 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:54.131101 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:54.131101 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:54.131101 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:54.131352 master-0 kubenswrapper[7440]: I0312 14:17:54.131103 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:55.132428 master-0 kubenswrapper[7440]: I0312 14:17:55.132318 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:55.132428 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:55.132428 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:55.132428 master-0 kubenswrapper[7440]: healthz check failed Mar 12 
14:17:55.132428 master-0 kubenswrapper[7440]: I0312 14:17:55.132410 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:56.131183 master-0 kubenswrapper[7440]: I0312 14:17:56.131098 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:56.131183 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:56.131183 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:56.131183 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:56.131492 master-0 kubenswrapper[7440]: I0312 14:17:56.131181 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:56.385787 master-0 kubenswrapper[7440]: E0312 14:17:56.385630 7440 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:17:56.385787 master-0 kubenswrapper[7440]: I0312 14:17:56.385697 7440 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 14:17:57.132393 master-0 kubenswrapper[7440]: I0312 14:17:57.132296 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:57.132393 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:57.132393 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:57.132393 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:57.132393 master-0 kubenswrapper[7440]: I0312 14:17:57.132369 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:57.633715 master-0 kubenswrapper[7440]: I0312 14:17:57.633650 7440 status_manager.go:851] "Failed to get status for pod" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 12 14:17:58.131542 master-0 kubenswrapper[7440]: I0312 14:17:58.131457 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:58.131542 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:58.131542 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:58.131542 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:58.132016 master-0 kubenswrapper[7440]: I0312 14:17:58.131532 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:17:59.131678 master-0 kubenswrapper[7440]: I0312 14:17:59.131602 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:17:59.131678 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:17:59.131678 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:17:59.131678 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:17:59.132253 master-0 kubenswrapper[7440]: I0312 14:17:59.131684 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:00.131155 master-0 kubenswrapper[7440]: I0312 14:18:00.131031 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:00.131155 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:00.131155 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:00.131155 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:00.131418 master-0 kubenswrapper[7440]: I0312 14:18:00.131174 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:01.131625 master-0 kubenswrapper[7440]: I0312 14:18:01.131505 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:18:01.131625 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:01.131625 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:01.131625 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:01.132366 master-0 kubenswrapper[7440]: I0312 14:18:01.131624 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:02.132464 master-0 kubenswrapper[7440]: I0312 14:18:02.132381 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:02.132464 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:02.132464 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:02.132464 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:02.133491 master-0 kubenswrapper[7440]: I0312 14:18:02.132471 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:02.905270 master-0 kubenswrapper[7440]: I0312 14:18:02.905205 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log" Mar 12 14:18:02.906554 master-0 kubenswrapper[7440]: I0312 14:18:02.906502 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 
14:18:02.906959 master-0 kubenswrapper[7440]: I0312 14:18:02.906882 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" exitCode=255 Mar 12 14:18:03.132271 master-0 kubenswrapper[7440]: I0312 14:18:03.132218 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:03.132271 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:03.132271 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:03.132271 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:03.133085 master-0 kubenswrapper[7440]: I0312 14:18:03.132282 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:04.131968 master-0 kubenswrapper[7440]: I0312 14:18:04.131806 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:04.131968 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:04.131968 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:04.131968 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:04.132481 master-0 kubenswrapper[7440]: I0312 14:18:04.132024 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 12 14:18:05.057630 master-0 kubenswrapper[7440]: E0312 14:18:05.057448 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db522516354 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.079178068 +0000 UTC m=+277.414556627,LastTimestamp:2026-03-12 14:16:57.079178068 +0000 UTC m=+277.414556627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:18:05.132957 master-0 kubenswrapper[7440]: I0312 14:18:05.132791 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:05.132957 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:05.132957 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:05.132957 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:05.133421 master-0 kubenswrapper[7440]: I0312 14:18:05.132969 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 
14:18:05.844266 master-0 kubenswrapper[7440]: E0312 14:18:05.844138 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:18:05.844569 master-0 kubenswrapper[7440]: E0312 14:18:05.844479 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.023s" Mar 12 14:18:05.859265 master-0 kubenswrapper[7440]: I0312 14:18:05.859190 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:18:05.930044 master-0 kubenswrapper[7440]: E0312 14:18:05.929879 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 14:18:05.930917 master-0 kubenswrapper[7440]: I0312 14:18:05.930859 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 14:18:05.961806 master-0 kubenswrapper[7440]: W0312 14:18:05.961721 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16 WatchSource:0}: Error finding container d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16: Status 404 returned error can't find the container with id d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16 Mar 12 14:18:06.132615 master-0 kubenswrapper[7440]: I0312 14:18:06.132470 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:06.132615 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:06.132615 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:06.132615 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:06.132615 master-0 kubenswrapper[7440]: I0312 14:18:06.132563 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:06.387015 master-0 kubenswrapper[7440]: E0312 14:18:06.386829 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 14:18:06.937725 master-0 kubenswrapper[7440]: I0312 14:18:06.937665 7440 generic.go:334] "Generic (PLEG): container 
finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7ad0044b2389b9999007ceef7cd4808d51c84380e6314ac6db787dc5a548f095" exitCode=0 Mar 12 14:18:07.132306 master-0 kubenswrapper[7440]: I0312 14:18:07.132227 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:07.132306 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:07.132306 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:07.132306 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:07.132306 master-0 kubenswrapper[7440]: I0312 14:18:07.132302 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:08.132770 master-0 kubenswrapper[7440]: I0312 14:18:08.132667 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:08.132770 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:08.132770 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:08.132770 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:08.132770 master-0 kubenswrapper[7440]: I0312 14:18:08.132767 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:09.132956 master-0 kubenswrapper[7440]: I0312 14:18:09.132849 
7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:09.132956 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:09.132956 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:09.132956 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:09.133802 master-0 kubenswrapper[7440]: I0312 14:18:09.132979 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:10.132969 master-0 kubenswrapper[7440]: I0312 14:18:10.132846 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:10.132969 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:10.132969 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:10.132969 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:10.134005 master-0 kubenswrapper[7440]: I0312 14:18:10.132995 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:11.131161 master-0 kubenswrapper[7440]: I0312 14:18:11.131091 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:11.131161 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:11.131161 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:11.131161 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:11.131460 master-0 kubenswrapper[7440]: I0312 14:18:11.131167 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:11.972119 master-0 kubenswrapper[7440]: I0312 14:18:11.972061 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/1.log" Mar 12 14:18:11.973393 master-0 kubenswrapper[7440]: I0312 14:18:11.973343 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/0.log" Mar 12 14:18:11.973458 master-0 kubenswrapper[7440]: I0312 14:18:11.973422 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7" exitCode=1 Mar 12 14:18:12.084848 master-0 kubenswrapper[7440]: E0312 14:18:12.084454 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:18:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:18:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:18:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:18:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\
"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:18:12.133445 master-0 kubenswrapper[7440]: I0312 14:18:12.133361 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:12.133445 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:12.133445 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:12.133445 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:12.133445 master-0 kubenswrapper[7440]: I0312 14:18:12.133444 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:13.132095 master-0 kubenswrapper[7440]: I0312 14:18:13.132004 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:13.132095 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:13.132095 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:13.132095 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:18:13.132813 master-0 kubenswrapper[7440]: I0312 14:18:13.132116 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:14.131409 master-0 kubenswrapper[7440]: I0312 14:18:14.131325 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:14.131409 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:14.131409 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:14.131409 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:14.131749 master-0 kubenswrapper[7440]: I0312 14:18:14.131412 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:15.132018 master-0 kubenswrapper[7440]: I0312 14:18:15.131941 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:15.132018 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:15.132018 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:15.132018 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:15.132924 master-0 kubenswrapper[7440]: I0312 14:18:15.132029 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:16.131712 master-0 kubenswrapper[7440]: I0312 14:18:16.131651 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:16.131712 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:16.131712 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:16.131712 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:16.132008 master-0 kubenswrapper[7440]: I0312 14:18:16.131723 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:16.588730 master-0 kubenswrapper[7440]: E0312 14:18:16.588633 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 14:18:17.131745 master-0 kubenswrapper[7440]: I0312 14:18:17.131695 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:17.131745 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:17.131745 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:17.131745 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:18:17.132091 master-0 kubenswrapper[7440]: I0312 14:18:17.131754 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:18.132416 master-0 kubenswrapper[7440]: I0312 14:18:18.132324 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:18.132416 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:18.132416 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:18.132416 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:18.133217 master-0 kubenswrapper[7440]: I0312 14:18:18.132433 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:18:19.131877 master-0 kubenswrapper[7440]: I0312 14:18:19.131783 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:18:19.131877 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:18:19.131877 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:18:19.131877 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:18:19.132224 master-0 kubenswrapper[7440]: I0312 14:18:19.131970 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:20.132341 master-0 kubenswrapper[7440]: I0312 14:18:20.132085 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:20.132341 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:20.132341 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:20.132341 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:20.132341 master-0 kubenswrapper[7440]: I0312 14:18:20.132167 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:21.132604 master-0 kubenswrapper[7440]: I0312 14:18:21.132490 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:21.132604 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:21.132604 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:21.132604 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:21.132604 master-0 kubenswrapper[7440]: I0312 14:18:21.132579 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:22.085579 master-0 kubenswrapper[7440]: E0312 14:18:22.085494 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:18:22.132588 master-0 kubenswrapper[7440]: I0312 14:18:22.132475 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:22.132588 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:22.132588 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:22.132588 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:22.132588 master-0 kubenswrapper[7440]: I0312 14:18:22.132565 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:23.135762 master-0 kubenswrapper[7440]: I0312 14:18:23.135638 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:23.135762 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:23.135762 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:23.135762 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:23.136802 master-0 kubenswrapper[7440]: I0312 14:18:23.135772 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:24.131171 master-0 kubenswrapper[7440]: I0312 14:18:24.131106 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:24.131171 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:24.131171 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:24.131171 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:24.131425 master-0 kubenswrapper[7440]: I0312 14:18:24.131169 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:25.131991 master-0 kubenswrapper[7440]: I0312 14:18:25.131945 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:25.131991 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:25.131991 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:25.131991 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:25.132602 master-0 kubenswrapper[7440]: I0312 14:18:25.132575 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:26.132203 master-0 kubenswrapper[7440]: I0312 14:18:26.132118 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:26.132203 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:26.132203 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:26.132203 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:26.132969 master-0 kubenswrapper[7440]: I0312 14:18:26.132224 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:26.989758 master-0 kubenswrapper[7440]: E0312 14:18:26.989683 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 12 14:18:27.131116 master-0 kubenswrapper[7440]: I0312 14:18:27.131027 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:27.131116 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:27.131116 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:27.131116 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:27.131116 master-0 kubenswrapper[7440]: I0312 14:18:27.131092 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:28.132264 master-0 kubenswrapper[7440]: I0312 14:18:28.132139 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:28.132264 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:28.132264 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:28.132264 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:28.133336 master-0 kubenswrapper[7440]: I0312 14:18:28.133050 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:29.131663 master-0 kubenswrapper[7440]: I0312 14:18:29.131602 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:29.131663 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:29.131663 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:29.131663 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:29.131976 master-0 kubenswrapper[7440]: I0312 14:18:29.131678 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:30.132110 master-0 kubenswrapper[7440]: I0312 14:18:30.132047 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:30.132110 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:30.132110 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:30.132110 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:30.132744 master-0 kubenswrapper[7440]: I0312 14:18:30.132118 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:31.132324 master-0 kubenswrapper[7440]: I0312 14:18:31.132223 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:31.132324 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:31.132324 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:31.132324 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:31.132324 master-0 kubenswrapper[7440]: I0312 14:18:31.132306 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:32.086062 master-0 kubenswrapper[7440]: E0312 14:18:32.085937 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:18:32.132415 master-0 kubenswrapper[7440]: I0312 14:18:32.132332 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:32.132415 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:32.132415 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:32.132415 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:32.133053 master-0 kubenswrapper[7440]: I0312 14:18:32.132439 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:33.131328 master-0 kubenswrapper[7440]: I0312 14:18:33.131267 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:18:33.131328 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:18:33.131328 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:18:33.131328 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:18:33.131601 master-0 kubenswrapper[7440]: I0312 14:18:33.131344 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:18:37.790795 master-0 kubenswrapper[7440]: E0312 14:18:37.790697 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 12 14:18:39.061433 master-0 kubenswrapper[7440]: E0312 14:18:39.061223 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db5225e52e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.080025829 +0000 UTC m=+277.415404388,LastTimestamp:2026-03-12 14:16:57.080025829 +0000 UTC m=+277.415404388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:18:39.862712 master-0 kubenswrapper[7440]: E0312 14:18:39.862622 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:18:39.862991 master-0 kubenswrapper[7440]: E0312 14:18:39.862948 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s"
Mar 12 14:18:39.863050 master-0 kubenswrapper[7440]: I0312 14:18:39.862982 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerDied","Data":"6332902d5d84cf465484ab14dac64d9b60905fd555e191dc35b3857c84ea5469"}
Mar 12 14:18:39.863173 master-0 kubenswrapper[7440]: I0312 14:18:39.863138 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:18:39.863261 master-0 kubenswrapper[7440]: I0312 14:18:39.863229 7440 scope.go:117] "RemoveContainer" containerID="ce4bbd63e68811b084a013b96af26d98956aa6df6255b0040e0ffbc96b8a34b0"
Mar 12 14:18:39.865524 master-0 kubenswrapper[7440]: I0312 14:18:39.865214 7440 scope.go:117] "RemoveContainer" containerID="77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d"
Mar 12 14:18:39.873075 master-0 kubenswrapper[7440]: I0312 14:18:39.873039 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 12 14:18:39.898866 master-0 kubenswrapper[7440]: I0312 14:18:39.898831 7440 scope.go:117] "RemoveContainer" containerID="f877e9e772e626aee6aab05c7ac905f2c4beb3f6e88c57c25b9eaeab3e18035d"
Mar 12 14:18:39.930824 master-0 kubenswrapper[7440]: I0312 14:18:39.930783 7440 scope.go:117] "RemoveContainer" containerID="19b7b4eaee1a852a9ccf6d4df36d726273012941b1ee088eb660f41b5b7c26b8"
Mar 12 14:18:39.951944 master-0 kubenswrapper[7440]: I0312 14:18:39.951883 7440 scope.go:117] "RemoveContainer" containerID="8c824a81227bbc4977bfae432c464a86a92fba843d33ea60db40b0306a18e201"
Mar 12 14:18:39.975702 master-0 kubenswrapper[7440]: I0312 14:18:39.975659 7440 scope.go:117] "RemoveContainer" containerID="e16a62b4a09dc1bf1229b7f6c1c70a440164d0b0527802cf7ca0f10f946c47d1"
Mar 12 14:18:39.992246 master-0 kubenswrapper[7440]: I0312 14:18:39.992180 7440 scope.go:117] "RemoveContainer" containerID="89d8d59bf6fa2a26b3f43dce31271bb83151aa62ec4c71c1e3cb8e9ec9a4453c"
Mar 12 14:18:40.007724 master-0 kubenswrapper[7440]: I0312 14:18:40.007692 7440 scope.go:117] "RemoveContainer" containerID="70f8a10f08775f9ef9b766aaa2353e10257f6f7a64d18cef4a9ce779cf9930f3"
Mar 12 14:18:40.024997 master-0 kubenswrapper[7440]: I0312 14:18:40.024960 7440 scope.go:117] "RemoveContainer" containerID="257ef0c1a29111b804b93df184b1276c19040c3b46129a42b1e503f5e1905151"
Mar 12 14:18:40.165831 master-0 kubenswrapper[7440]: I0312 14:18:40.165652 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log"
Mar 12 14:18:40.167247 master-0 kubenswrapper[7440]: I0312 14:18:40.167215 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log"
Mar 12 14:18:40.430832 master-0 kubenswrapper[7440]: I0312 14:18:40.430350 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 12 14:18:40.596596 master-0 kubenswrapper[7440]: I0312 14:18:40.596527 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock\") pod \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") "
Mar 12 14:18:40.596823 master-0 kubenswrapper[7440]: I0312 14:18:40.596597 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b2d8e6e9-c10f-4b43-8155-9addbfddba2e" (UID: "b2d8e6e9-c10f-4b43-8155-9addbfddba2e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:18:40.596823 master-0 kubenswrapper[7440]: I0312 14:18:40.596706 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir\") pod \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") "
Mar 12 14:18:40.596823 master-0 kubenswrapper[7440]: I0312 14:18:40.596750 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access\") pod \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\" (UID: \"b2d8e6e9-c10f-4b43-8155-9addbfddba2e\") "
Mar 12 14:18:40.597002 master-0 kubenswrapper[7440]: I0312 14:18:40.596817 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b2d8e6e9-c10f-4b43-8155-9addbfddba2e" (UID: "b2d8e6e9-c10f-4b43-8155-9addbfddba2e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:18:40.597046 master-0 kubenswrapper[7440]: I0312 14:18:40.597003 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:18:40.597046 master-0 kubenswrapper[7440]: I0312 14:18:40.597017 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:18:40.600363 master-0 kubenswrapper[7440]: I0312 14:18:40.600299 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b2d8e6e9-c10f-4b43-8155-9addbfddba2e" (UID: "b2d8e6e9-c10f-4b43-8155-9addbfddba2e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:18:40.698787 master-0 kubenswrapper[7440]: I0312 14:18:40.698648 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2d8e6e9-c10f-4b43-8155-9addbfddba2e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:18:41.187584 master-0 kubenswrapper[7440]: I0312 14:18:41.187527 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 12 14:18:42.087545 master-0 kubenswrapper[7440]: E0312 14:18:42.087471 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:18:42.194817 master-0 kubenswrapper[7440]: I0312 14:18:42.194685 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerID="d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049" exitCode=0
Mar 12 14:18:43.262685 master-0 kubenswrapper[7440]: I0312 14:18:43.262607 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:18:43.264965 master-0 kubenswrapper[7440]: I0312 14:18:43.262688 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:18:43.264965 master-0 kubenswrapper[7440]: I0312 14:18:43.262607 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:18:43.264965 master-0 kubenswrapper[7440]: I0312 14:18:43.262822 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:18:49.392183 master-0 kubenswrapper[7440]: E0312 14:18:49.391999 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 12 14:18:52.088196 master-0 kubenswrapper[7440]: E0312 14:18:52.088127 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:18:52.088196 master-0 kubenswrapper[7440]: E0312 14:18:52.088184 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 12 14:18:53.262948 master-0 kubenswrapper[7440]: I0312 14:18:53.262824 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:18:53.263485 master-0 kubenswrapper[7440]: I0312 14:18:53.262957 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:18:53.263485 master-0 kubenswrapper[7440]: I0312 14:18:53.263091 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:18:53.263485 master-0 kubenswrapper[7440]: I0312 14:18:53.263196 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:18:55.299699 master-0 kubenswrapper[7440]: I0312 14:18:55.299626 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log"
Mar 12 14:18:55.300727 master-0 kubenswrapper[7440]: I0312 14:18:55.300081 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/0.log"
Mar 12 14:18:55.300727 master-0 kubenswrapper[7440]: I0312 14:18:55.300112 7440 generic.go:334] "Generic (PLEG): container finished" podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" exitCode=1
Mar 12 14:18:57.634843 master-0 kubenswrapper[7440]: I0312 14:18:57.634722 7440 status_manager.go:851] "Failed to get status for pod" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-retry-1-master-0)"
Mar 12 14:19:02.592938 master-0 kubenswrapper[7440]: E0312 14:19:02.592837 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 12 14:19:03.261952 master-0 kubenswrapper[7440]: I0312 14:19:03.261833 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:19:03.262716 master-0 kubenswrapper[7440]: I0312 14:19:03.261961 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:19:03.262716 master-0 kubenswrapper[7440]: I0312 14:19:03.262176 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body=
Mar 12 14:19:03.262716 master-0 kubenswrapper[7440]: I0312 14:19:03.262267 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused"
Mar 12 14:19:11.412389 master-0 kubenswrapper[7440]: I0312 14:19:11.412342 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/2.log"
Mar 12 14:19:11.413392 master-0 kubenswrapper[7440]: I0312 14:19:11.413371 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log"
Mar 12 14:19:11.414243 master-0 kubenswrapper[7440]: I0312 14:19:11.414224 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log"
Mar 12 14:19:11.414652 master-0 kubenswrapper[7440]: I0312 14:19:11.414624 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" exitCode=255
Mar 12 14:19:12.130359 master-0 kubenswrapper[7440]: E0312 14:19:12.130193 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:19:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:19:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:19:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:19:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\
"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:19:13.065736 master-0 kubenswrapper[7440]: E0312 14:19:13.065550 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db52c6eb5be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.24887187 +0000 UTC m=+277.584250429,LastTimestamp:2026-03-12 14:16:57.24887187 +0000 UTC m=+277.584250429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:19:13.262857 master-0 kubenswrapper[7440]: I0312 14:19:13.262816 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection 
refused" start-of-body= Mar 12 14:19:13.263190 master-0 kubenswrapper[7440]: I0312 14:19:13.263154 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:19:13.875338 master-0 kubenswrapper[7440]: E0312 14:19:13.875298 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:19:13.875769 master-0 kubenswrapper[7440]: E0312 14:19:13.875748 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Mar 12 14:19:13.875927 master-0 kubenswrapper[7440]: I0312 14:19:13.875883 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:19:13.876066 master-0 kubenswrapper[7440]: I0312 14:19:13.876049 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:19:13.876169 master-0 kubenswrapper[7440]: I0312 14:19:13.876153 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:19:13.890490 master-0 kubenswrapper[7440]: I0312 14:19:13.889647 7440 scope.go:117] "RemoveContainer" containerID="41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:19:13.891209 master-0 kubenswrapper[7440]: E0312 14:19:13.891132 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:19:13.899781 master-0 kubenswrapper[7440]: I0312 14:19:13.899715 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:19:18.994201 master-0 kubenswrapper[7440]: E0312 14:19:18.994139 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:19:22.130819 master-0 kubenswrapper[7440]: E0312 14:19:22.130742 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:19:23.262624 master-0 kubenswrapper[7440]: I0312 14:19:23.262545 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:19:23.262624 master-0 kubenswrapper[7440]: I0312 14:19:23.262609 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:19:32.131796 master-0 
kubenswrapper[7440]: E0312 14:19:32.131694 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:19:33.263952 master-0 kubenswrapper[7440]: I0312 14:19:33.262098 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:19:33.263952 master-0 kubenswrapper[7440]: I0312 14:19:33.262193 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:19:33.992859 master-0 kubenswrapper[7440]: I0312 14:19:33.992788 7440 generic.go:334] "Generic (PLEG): container finished" podID="6defef79-6058-466a-ae0b-8eb9258126be" containerID="e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba" exitCode=0 Mar 12 14:19:33.994564 master-0 kubenswrapper[7440]: I0312 14:19:33.994503 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:19:33.994564 master-0 kubenswrapper[7440]: I0312 14:19:33.994546 7440 generic.go:334] "Generic (PLEG): container finished" podID="3edaa533-ecbb-443e-a270-4cb4f923daf6" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" exitCode=1 Mar 12 14:19:33.996206 master-0 kubenswrapper[7440]: I0312 14:19:33.996160 7440 generic.go:334] 
"Generic (PLEG): container finished" podID="99433993-93cf-46cb-bb66-485672cb2554" containerID="942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705" exitCode=0 Mar 12 14:19:33.998139 master-0 kubenswrapper[7440]: I0312 14:19:33.998111 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-7s8fj_f3c13c5f-3d1f-4e0a-b77b-732255680086/control-plane-machine-set-operator/0.log" Mar 12 14:19:33.998206 master-0 kubenswrapper[7440]: I0312 14:19:33.998147 7440 generic.go:334] "Generic (PLEG): container finished" podID="f3c13c5f-3d1f-4e0a-b77b-732255680086" containerID="c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f" exitCode=1 Mar 12 14:19:35.995861 master-0 kubenswrapper[7440]: E0312 14:19:35.995751 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:19:39.763763 master-0 kubenswrapper[7440]: I0312 14:19:39.763691 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:39.764281 master-0 kubenswrapper[7440]: I0312 14:19:39.763761 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:19:39.764281 master-0 kubenswrapper[7440]: I0312 14:19:39.763694 7440 patch_prober.go:28] interesting 
pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:39.764281 master-0 kubenswrapper[7440]: I0312 14:19:39.763867 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:19:42.132642 master-0 kubenswrapper[7440]: E0312 14:19:42.132575 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:19:43.059316 master-0 kubenswrapper[7440]: I0312 14:19:43.059237 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-44b6s_40912d56-8288-4d58-ad91-7455bd460887/machine-approver-controller/0.log" Mar 12 14:19:43.059757 master-0 kubenswrapper[7440]: I0312 14:19:43.059682 7440 generic.go:334] "Generic (PLEG): container finished" podID="40912d56-8288-4d58-ad91-7455bd460887" containerID="6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184" exitCode=255 Mar 12 14:19:43.263093 master-0 kubenswrapper[7440]: I0312 14:19:43.263006 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:19:43.263093 master-0 kubenswrapper[7440]: I0312 
14:19:43.263087 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:19:44.068839 master-0 kubenswrapper[7440]: I0312 14:19:44.068752 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log" Mar 12 14:19:44.069197 master-0 kubenswrapper[7440]: I0312 14:19:44.069155 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db" exitCode=1 Mar 12 14:19:44.520010 master-0 kubenswrapper[7440]: I0312 14:19:44.519974 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:19:44.520525 master-0 kubenswrapper[7440]: I0312 14:19:44.520011 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:19:44.520525 master-0 kubenswrapper[7440]: I0312 14:19:44.520075 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection 
refused" Mar 12 14:19:44.520525 master-0 kubenswrapper[7440]: I0312 14:19:44.520020 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:19:47.067688 master-0 kubenswrapper[7440]: E0312 14:19:47.067543 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db52cdeb9f0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.256212976 +0000 UTC m=+277.591591535,LastTimestamp:2026-03-12 14:16:57.256212976 +0000 UTC m=+277.591591535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:19:47.903206 master-0 kubenswrapper[7440]: E0312 14:19:47.902990 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:19:47.903646 master-0 kubenswrapper[7440]: E0312 14:19:47.903382 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.027s" Mar 12 14:19:47.903646 
master-0 kubenswrapper[7440]: I0312 14:19:47.903409 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0" Mar 12 14:19:47.903646 master-0 kubenswrapper[7440]: I0312 14:19:47.903419 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:19:47.904175 master-0 kubenswrapper[7440]: I0312 14:19:47.904132 7440 scope.go:117] "RemoveContainer" containerID="41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:19:47.915271 master-0 kubenswrapper[7440]: I0312 14:19:47.915216 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:19:48.116310 master-0 kubenswrapper[7440]: I0312 14:19:48.116267 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/2.log" Mar 12 14:19:48.117610 master-0 kubenswrapper[7440]: I0312 14:19:48.117589 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log" Mar 12 14:19:48.119117 master-0 kubenswrapper[7440]: I0312 14:19:48.119079 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:19:49.763416 master-0 kubenswrapper[7440]: I0312 14:19:49.763334 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get 
\"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:49.764060 master-0 kubenswrapper[7440]: I0312 14:19:49.763434 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:19:49.764060 master-0 kubenswrapper[7440]: I0312 14:19:49.763620 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:49.764060 master-0 kubenswrapper[7440]: I0312 14:19:49.763688 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:19:52.133607 master-0 kubenswrapper[7440]: E0312 14:19:52.133526 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:19:52.133607 master-0 kubenswrapper[7440]: E0312 14:19:52.133576 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:19:52.997240 master-0 kubenswrapper[7440]: E0312 14:19:52.997193 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:19:53.262498 master-0 kubenswrapper[7440]: I0312 14:19:53.262332 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:19:53.262498 master-0 kubenswrapper[7440]: I0312 14:19:53.262398 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:19:54.519690 master-0 kubenswrapper[7440]: I0312 14:19:54.519628 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:19:54.520181 master-0 kubenswrapper[7440]: I0312 14:19:54.519693 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:19:54.520181 master-0 kubenswrapper[7440]: I0312 14:19:54.519773 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe 
status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:19:54.520181 master-0 kubenswrapper[7440]: I0312 14:19:54.519839 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:19:57.651264 master-0 kubenswrapper[7440]: I0312 14:19:57.651170 7440 status_manager.go:851] "Failed to get status for pod" podUID="7fed292c3d5a90a99bfee43e89190405" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 12 14:19:59.763696 master-0 kubenswrapper[7440]: I0312 14:19:59.763634 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:59.763696 master-0 kubenswrapper[7440]: I0312 14:19:59.763662 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:19:59.763696 master-0 kubenswrapper[7440]: I0312 14:19:59.763690 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:19:59.764362 master-0 kubenswrapper[7440]: I0312 14:19:59.763709 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:03.263025 master-0 kubenswrapper[7440]: I0312 14:20:03.262270 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:03.264864 master-0 kubenswrapper[7440]: I0312 14:20:03.264245 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:04.519393 master-0 kubenswrapper[7440]: I0312 14:20:04.519342 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:04.519855 master-0 kubenswrapper[7440]: I0312 14:20:04.519398 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get 
\"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:04.519855 master-0 kubenswrapper[7440]: I0312 14:20:04.519445 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:04.519855 master-0 kubenswrapper[7440]: I0312 14:20:04.519514 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:09.763775 master-0 kubenswrapper[7440]: I0312 14:20:09.763531 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:20:09.763775 master-0 kubenswrapper[7440]: I0312 14:20:09.763769 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:09.999033 master-0 kubenswrapper[7440]: E0312 14:20:09.998666 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" interval="7s" Mar 12 14:20:12.377458 master-0 kubenswrapper[7440]: E0312 14:20:12.377202 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:20:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:20:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:20:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:20:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae
0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBy
tes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088
910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe5
3a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:20:13.262669 master-0 kubenswrapper[7440]: I0312 14:20:13.262581 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:13.262669 master-0 kubenswrapper[7440]: I0312 14:20:13.262657 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:14.519685 master-0 kubenswrapper[7440]: I0312 14:20:14.519626 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:14.520178 master-0 kubenswrapper[7440]: 
I0312 14:20:14.519699 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:19.315651 master-0 kubenswrapper[7440]: I0312 14:20:19.315605 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/3.log" Mar 12 14:20:19.316262 master-0 kubenswrapper[7440]: I0312 14:20:19.316084 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/2.log" Mar 12 14:20:19.316550 master-0 kubenswrapper[7440]: I0312 14:20:19.316515 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log" Mar 12 14:20:19.317672 master-0 kubenswrapper[7440]: I0312 14:20:19.317651 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:20:19.318089 master-0 kubenswrapper[7440]: I0312 14:20:19.318056 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" exitCode=255 Mar 12 14:20:19.763029 master-0 kubenswrapper[7440]: I0312 14:20:19.762955 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:20:19.763264 master-0 kubenswrapper[7440]: I0312 14:20:19.763031 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:21.069756 master-0 kubenswrapper[7440]: E0312 14:20:21.069572 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db52cec5b5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.257106269 +0000 UTC m=+277.592484828,LastTimestamp:2026-03-12 14:16:57.257106269 +0000 UTC m=+277.592484828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:20:21.917417 master-0 kubenswrapper[7440]: E0312 14:20:21.917316 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
Mar 12 14:20:21.918138 master-0 kubenswrapper[7440]: E0312 14:20:21.917634 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Mar 12 14:20:21.934223 master-0 kubenswrapper[7440]: I0312 14:20:21.934162 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:20:22.379440 master-0 kubenswrapper[7440]: E0312 14:20:22.379240 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:20:23.263460 master-0 kubenswrapper[7440]: I0312 14:20:23.263217 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:23.263460 master-0 kubenswrapper[7440]: I0312 14:20:23.263294 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:24.519981 master-0 kubenswrapper[7440]: I0312 14:20:24.519940 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:24.520391 master-0 kubenswrapper[7440]: I0312 14:20:24.519985 7440 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:27.000758 master-0 kubenswrapper[7440]: E0312 14:20:27.000628 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:20:29.763236 master-0 kubenswrapper[7440]: I0312 14:20:29.763165 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:20:29.765492 master-0 kubenswrapper[7440]: I0312 14:20:29.763244 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:32.380621 master-0 kubenswrapper[7440]: E0312 14:20:32.380569 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:20:33.263178 master-0 kubenswrapper[7440]: I0312 14:20:33.263124 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:33.263535 master-0 kubenswrapper[7440]: I0312 14:20:33.263503 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:34.286593 master-0 kubenswrapper[7440]: I0312 14:20:34.286530 7440 scope.go:117] "RemoveContainer" containerID="03429e462f0622cfb4b81f008568fcb386a658560e44c8b3a80cc0aa9bf08473" Mar 12 14:20:34.301475 master-0 kubenswrapper[7440]: I0312 14:20:34.301442 7440 scope.go:117] "RemoveContainer" containerID="95ba11fc8a440b0f75fb1a6bf90aed334dc73dd1799f7af488f9efe94a5e77b1" Mar 12 14:20:34.520180 master-0 kubenswrapper[7440]: I0312 14:20:34.520136 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:34.520365 master-0 kubenswrapper[7440]: I0312 14:20:34.520208 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:39.762848 master-0 kubenswrapper[7440]: I0312 14:20:39.762784 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure 
output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:20:39.763954 master-0 kubenswrapper[7440]: I0312 14:20:39.763822 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:42.381534 master-0 kubenswrapper[7440]: E0312 14:20:42.381414 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:20:43.263364 master-0 kubenswrapper[7440]: I0312 14:20:43.262059 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:43.263364 master-0 kubenswrapper[7440]: I0312 14:20:43.262393 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:44.001370 master-0 kubenswrapper[7440]: E0312 14:20:44.001299 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" interval="7s" Mar 12 14:20:44.519426 master-0 kubenswrapper[7440]: I0312 14:20:44.519377 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:44.519657 master-0 kubenswrapper[7440]: I0312 14:20:44.519441 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:49.762797 master-0 kubenswrapper[7440]: I0312 14:20:49.762736 7440 patch_prober.go:28] interesting pod/controller-manager-6689dcd7fd-vw9vd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" start-of-body= Mar 12 14:20:49.764029 master-0 kubenswrapper[7440]: I0312 14:20:49.763975 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.65:8443/healthz\": dial tcp 10.128.0.65:8443: connect: connection refused" Mar 12 14:20:52.382317 master-0 kubenswrapper[7440]: E0312 14:20:52.382192 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:20:52.382317 master-0 kubenswrapper[7440]: E0312 
14:20:52.382258 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:20:53.262376 master-0 kubenswrapper[7440]: I0312 14:20:53.262291 7440 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-qzdff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" start-of-body= Mar 12 14:20:53.262653 master-0 kubenswrapper[7440]: I0312 14:20:53.262388 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" podUID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.23:8080/healthz\": dial tcp 10.128.0.23:8080: connect: connection refused" Mar 12 14:20:54.519516 master-0 kubenswrapper[7440]: I0312 14:20:54.519459 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 12 14:20:54.520105 master-0 kubenswrapper[7440]: I0312 14:20:54.519517 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 12 14:20:55.073401 master-0 kubenswrapper[7440]: E0312 14:20:55.073189 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db53930aaaa 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.46290961 +0000 UTC m=+277.798288169,LastTimestamp:2026-03-12 14:16:57.46290961 +0000 UTC m=+277.798288169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:20:55.937114 master-0 kubenswrapper[7440]: E0312 14:20:55.937008 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:20:55.938010 master-0 kubenswrapper[7440]: E0312 14:20:55.937278 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.02s" Mar 12 14:20:55.938010 master-0 kubenswrapper[7440]: I0312 14:20:55.937349 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:20:55.939967 master-0 kubenswrapper[7440]: I0312 14:20:55.939858 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:20:55.940238 master-0 kubenswrapper[7440]: I0312 14:20:55.940215 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:20:55.940822 master-0 kubenswrapper[7440]: I0312 14:20:55.940733 7440 scope.go:117] 
"RemoveContainer" containerID="cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7" Mar 12 14:20:55.942979 master-0 kubenswrapper[7440]: I0312 14:20:55.942926 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:20:55.943091 master-0 kubenswrapper[7440]: I0312 14:20:55.943069 7440 scope.go:117] "RemoveContainer" containerID="e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba" Mar 12 14:20:55.943659 master-0 kubenswrapper[7440]: I0312 14:20:55.943610 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted" Mar 12 14:20:55.943709 master-0 kubenswrapper[7440]: I0312 14:20:55.943682 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1" gracePeriod=3600 Mar 12 14:20:55.945498 master-0 kubenswrapper[7440]: I0312 14:20:55.945387 7440 scope.go:117] "RemoveContainer" containerID="b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db" Mar 12 14:20:55.945695 master-0 kubenswrapper[7440]: I0312 14:20:55.945643 7440 scope.go:117] "RemoveContainer" containerID="c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f" Mar 12 14:20:55.945868 master-0 kubenswrapper[7440]: I0312 14:20:55.945701 7440 scope.go:117] "RemoveContainer" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" Mar 12 14:20:55.946172 master-0 kubenswrapper[7440]: I0312 14:20:55.946022 7440 scope.go:117] "RemoveContainer" 
containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:20:55.946267 master-0 kubenswrapper[7440]: I0312 14:20:55.946191 7440 scope.go:117] "RemoveContainer" containerID="942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705" Mar 12 14:20:55.946267 master-0 kubenswrapper[7440]: I0312 14:20:55.946204 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:20:55.946491 master-0 kubenswrapper[7440]: I0312 14:20:55.946403 7440 scope.go:117] "RemoveContainer" containerID="6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184" Mar 12 14:20:55.946736 master-0 kubenswrapper[7440]: I0312 14:20:55.946669 7440 scope.go:117] "RemoveContainer" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:20:55.947447 master-0 kubenswrapper[7440]: E0312 14:20:55.946932 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:20:55.948606 master-0 kubenswrapper[7440]: I0312 14:20:55.948488 7440 scope.go:117] "RemoveContainer" containerID="d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049" Mar 12 14:20:55.959346 master-0 kubenswrapper[7440]: I0312 14:20:55.958938 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:20:56.568556 master-0 kubenswrapper[7440]: I0312 14:20:56.568504 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-44b6s_40912d56-8288-4d58-ad91-7455bd460887/machine-approver-controller/0.log" Mar 12 14:20:56.573290 master-0 kubenswrapper[7440]: I0312 14:20:56.573252 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:20:56.578768 master-0 kubenswrapper[7440]: I0312 14:20:56.578745 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-7s8fj_f3c13c5f-3d1f-4e0a-b77b-732255680086/control-plane-machine-set-operator/0.log" Mar 12 14:20:56.581053 master-0 kubenswrapper[7440]: I0312 14:20:56.581015 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log" Mar 12 14:20:56.584100 master-0 kubenswrapper[7440]: I0312 14:20:56.584071 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/1.log" Mar 12 14:20:56.585153 master-0 kubenswrapper[7440]: I0312 14:20:56.585115 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/0.log" Mar 12 14:20:56.587993 master-0 kubenswrapper[7440]: I0312 14:20:56.587969 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:20:56.588525 master-0 kubenswrapper[7440]: I0312 14:20:56.588495 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/0.log" Mar 12 14:20:56.589128 master-0 kubenswrapper[7440]: I0312 14:20:56.589101 7440 scope.go:117] "RemoveContainer" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:20:56.589395 master-0 kubenswrapper[7440]: E0312 14:20:56.589369 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:20:56.831804 master-0 kubenswrapper[7440]: I0312 14:20:56.831763 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_05fc4965-b390-4edc-a407-d431b06d7612/installer/0.log" Mar 12 14:20:56.831987 master-0 kubenswrapper[7440]: I0312 14:20:56.831836 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:20:56.832692 master-0 kubenswrapper[7440]: I0312 14:20:56.832656 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access\") pod \"05fc4965-b390-4edc-a407-d431b06d7612\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " Mar 12 14:20:56.832757 master-0 kubenswrapper[7440]: I0312 14:20:56.832695 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock\") pod \"05fc4965-b390-4edc-a407-d431b06d7612\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " Mar 12 14:20:56.832757 master-0 kubenswrapper[7440]: I0312 14:20:56.832754 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir\") pod \"05fc4965-b390-4edc-a407-d431b06d7612\" (UID: \"05fc4965-b390-4edc-a407-d431b06d7612\") " Mar 12 14:20:56.832879 master-0 kubenswrapper[7440]: I0312 14:20:56.832846 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock" (OuterVolumeSpecName: "var-lock") pod "05fc4965-b390-4edc-a407-d431b06d7612" (UID: "05fc4965-b390-4edc-a407-d431b06d7612"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:20:56.832952 master-0 kubenswrapper[7440]: I0312 14:20:56.832911 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05fc4965-b390-4edc-a407-d431b06d7612" (UID: "05fc4965-b390-4edc-a407-d431b06d7612"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:20:56.833157 master-0 kubenswrapper[7440]: I0312 14:20:56.833126 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:20:56.833157 master-0 kubenswrapper[7440]: I0312 14:20:56.833151 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05fc4965-b390-4edc-a407-d431b06d7612-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:20:56.835252 master-0 kubenswrapper[7440]: I0312 14:20:56.835191 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05fc4965-b390-4edc-a407-d431b06d7612" (UID: "05fc4965-b390-4edc-a407-d431b06d7612"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:20:56.935973 master-0 kubenswrapper[7440]: I0312 14:20:56.935913 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05fc4965-b390-4edc-a407-d431b06d7612-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:20:57.596030 master-0 kubenswrapper[7440]: I0312 14:20:57.595973 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_05fc4965-b390-4edc-a407-d431b06d7612/installer/0.log" Mar 12 14:20:57.596738 master-0 kubenswrapper[7440]: I0312 14:20:57.596070 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:20:57.653200 master-0 kubenswrapper[7440]: I0312 14:20:57.653095 7440 status_manager.go:851] "Failed to get status for pod" podUID="29c709c82970b529e7b9b895aa92ef05" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 12 14:21:01.001714 master-0 kubenswrapper[7440]: E0312 14:21:01.001569 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 12 14:21:12.505105 master-0 kubenswrapper[7440]: E0312 14:21:12.504958 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:21:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:21:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:21:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:21:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05
daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c89
6d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\"
:511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795
c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:17.967825 master-0 kubenswrapper[7440]: I0312 14:21:17.967739 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:21:17.967825 master-0 kubenswrapper[7440]: I0312 14:21:17.967801 7440 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:21:18.003073 master-0 kubenswrapper[7440]: E0312 14:21:18.002994 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:21:22.506117 master-0 kubenswrapper[7440]: E0312 14:21:22.506044 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:26.793713 master-0 kubenswrapper[7440]: I0312 14:21:26.793665 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log" Mar 12 14:21:26.794342 master-0 kubenswrapper[7440]: I0312 14:21:26.794075 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:21:26.794576 master-0 kubenswrapper[7440]: I0312 14:21:26.794543 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/0.log" Mar 12 14:21:26.794628 master-0 kubenswrapper[7440]: I0312 14:21:26.794587 7440 generic.go:334] "Generic (PLEG): container finished" 
podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" exitCode=1 Mar 12 14:21:29.076363 master-0 kubenswrapper[7440]: E0312 14:21:29.076153 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db539d62012 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:57.473753106 +0000 UTC m=+277.809131665,LastTimestamp:2026-03-12 14:16:57.473753106 +0000 UTC m=+277.809131665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:21:29.943644 master-0 kubenswrapper[7440]: E0312 14:21:29.943581 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 14:21:29.961727 master-0 kubenswrapper[7440]: E0312 14:21:29.961667 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:21:29.961983 master-0 kubenswrapper[7440]: E0312 14:21:29.961926 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 12 14:21:29.961983 
master-0 kubenswrapper[7440]: I0312 14:21:29.961960 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:21:29.961983 master-0 kubenswrapper[7440]: I0312 14:21:29.961978 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:21:29.962134 master-0 kubenswrapper[7440]: I0312 14:21:29.961993 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" Mar 12 14:21:29.962134 master-0 kubenswrapper[7440]: I0312 14:21:29.962002 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:21:29.963196 master-0 kubenswrapper[7440]: I0312 14:21:29.963157 7440 scope.go:117] "RemoveContainer" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:21:29.970227 master-0 kubenswrapper[7440]: I0312 14:21:29.970170 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:21:30.836041 master-0 kubenswrapper[7440]: I0312 14:21:30.835974 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/3.log" Mar 12 14:21:30.836513 master-0 kubenswrapper[7440]: I0312 14:21:30.836473 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/2.log" Mar 12 14:21:30.836942 master-0 kubenswrapper[7440]: I0312 14:21:30.836918 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log" Mar 12 14:21:30.838070 master-0 kubenswrapper[7440]: I0312 14:21:30.838039 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:21:30.839944 master-0 kubenswrapper[7440]: I0312 14:21:30.839853 7440 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="557e5767b6a5906fd35802d8cc7a729030365600bcb6aca559cdc1d58e816deb" exitCode=0 Mar 12 14:21:32.507044 master-0 kubenswrapper[7440]: E0312 14:21:32.506982 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:35.004866 master-0 kubenswrapper[7440]: E0312 14:21:35.004620 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:21:42.507660 master-0 kubenswrapper[7440]: E0312 14:21:42.507598 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:42.942089 master-0 kubenswrapper[7440]: I0312 14:21:42.941878 7440 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerID="c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1" exitCode=0 Mar 12 14:21:45.520287 master-0 kubenswrapper[7440]: I0312 14:21:45.520173 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:21:45.520287 master-0 kubenswrapper[7440]: I0312 14:21:45.520265 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:45.521265 master-0 kubenswrapper[7440]: I0312 14:21:45.520420 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:21:45.521265 master-0 kubenswrapper[7440]: I0312 14:21:45.520454 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:47.968558 master-0 kubenswrapper[7440]: I0312 14:21:47.968445 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:21:47.968558 master-0 kubenswrapper[7440]: I0312 14:21:47.968532 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:21:52.006057 master-0 kubenswrapper[7440]: E0312 14:21:52.005960 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:21:52.508845 master-0 kubenswrapper[7440]: E0312 14:21:52.508727 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:52.508845 master-0 kubenswrapper[7440]: E0312 14:21:52.508786 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:21:55.520138 master-0 kubenswrapper[7440]: I0312 14:21:55.519956 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:21:55.520138 master-0 kubenswrapper[7440]: I0312 14:21:55.520028 7440 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:55.520138 master-0 kubenswrapper[7440]: I0312 14:21:55.519960 7440 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:21:55.520138 master-0 kubenswrapper[7440]: I0312 14:21:55.520097 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:21:57.037119 master-0 kubenswrapper[7440]: I0312 14:21:57.037052 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/1.log" Mar 12 14:21:57.038366 master-0 kubenswrapper[7440]: I0312 14:21:57.038312 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:21:57.038496 master-0 kubenswrapper[7440]: I0312 14:21:57.038375 7440 generic.go:334] "Generic (PLEG): container finished" podID="3edaa533-ecbb-443e-a270-4cb4f923daf6" 
containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" exitCode=1 Mar 12 14:21:57.655158 master-0 kubenswrapper[7440]: I0312 14:21:57.655091 7440 status_manager.go:851] "Failed to get status for pod" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods metrics-server-85b44c7984-pzbfq)" Mar 12 14:22:01.070886 master-0 kubenswrapper[7440]: I0312 14:22:01.070774 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/4.log" Mar 12 14:22:01.072212 master-0 kubenswrapper[7440]: I0312 14:22:01.071251 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/3.log" Mar 12 14:22:01.072212 master-0 kubenswrapper[7440]: I0312 14:22:01.071645 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/2.log" Mar 12 14:22:01.072212 master-0 kubenswrapper[7440]: I0312 14:22:01.072044 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/1.log" Mar 12 14:22:01.073195 master-0 kubenswrapper[7440]: I0312 14:22:01.073147 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/0.log" Mar 12 14:22:01.073587 master-0 kubenswrapper[7440]: I0312 14:22:01.073515 7440 generic.go:334] "Generic (PLEG): container finished" 
podID="7fed292c3d5a90a99bfee43e89190405" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" exitCode=255 Mar 12 14:22:03.081730 master-0 kubenswrapper[7440]: E0312 14:22:03.081425 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 12 14:22:03.081730 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{kube-controller-manager-master-0.189c1db807ad5717 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:22:03.081730 master-0 kubenswrapper[7440]: body: Mar 12 14:22:03.081730 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:09.517121303 +0000 UTC m=+289.852499862,LastTimestamp:2026-03-12 14:17:09.517121303 +0000 UTC m=+289.852499862,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 14:22:03.081730 master-0 kubenswrapper[7440]: > Mar 12 14:22:03.973070 master-0 kubenswrapper[7440]: E0312 14:22:03.973003 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:22:03.973299 master-0 kubenswrapper[7440]: E0312 14:22:03.973227 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="34.011s" Mar 12 14:22:03.973299 master-0 kubenswrapper[7440]: I0312 14:22:03.973253 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0" Mar 12 14:22:03.973299 master-0 kubenswrapper[7440]: I0312 14:22:03.973261 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:03.973299 master-0 kubenswrapper[7440]: I0312 14:22:03.973283 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:22:03.973299 master-0 kubenswrapper[7440]: I0312 14:22:03.973300 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973316 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973323 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973340 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" containerID="cri-o://b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973347 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973415 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0"} Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973437 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" Mar 12 14:22:03.973547 master-0 kubenswrapper[7440]: I0312 14:22:03.973448 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:03.973789 master-0 kubenswrapper[7440]: I0312 14:22:03.973664 7440 scope.go:117] "RemoveContainer" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:22:03.976599 master-0 kubenswrapper[7440]: I0312 14:22:03.976516 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:22:03.977180 master-0 kubenswrapper[7440]: E0312 14:22:03.977138 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:22:03.977263 master-0 kubenswrapper[7440]: I0312 14:22:03.977223 7440 status_manager.go:317] "Container readiness changed for unknown 
container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:22:03.977263 master-0 kubenswrapper[7440]: I0312 14:22:03.977253 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:03.986918 master-0 kubenswrapper[7440]: I0312 14:22:03.986846 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:22:03.996048 master-0 kubenswrapper[7440]: I0312 14:22:03.996007 7440 scope.go:117] "RemoveContainer" containerID="41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:22:04.016117 master-0 kubenswrapper[7440]: I0312 14:22:04.016057 7440 scope.go:117] "RemoveContainer" containerID="77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" Mar 12 14:22:04.034576 master-0 kubenswrapper[7440]: I0312 14:22:04.034516 7440 scope.go:117] "RemoveContainer" containerID="bfb8925b65ca795f99c38fd98275a891cfe30f8e50ff7cdc4998c8b7134a6ec0" Mar 12 14:22:04.094768 master-0 kubenswrapper[7440]: I0312 14:22:04.094689 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/4.log" Mar 12 14:22:04.096672 master-0 kubenswrapper[7440]: I0312 14:22:04.096632 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:22:04.097092 master-0 kubenswrapper[7440]: E0312 14:22:04.097047 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:22:04.131471 master-0 kubenswrapper[7440]: I0312 14:22:04.131399 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:04.131471 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:04.131471 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:04.131471 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:04.131827 master-0 kubenswrapper[7440]: I0312 14:22:04.131471 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:05.104623 master-0 kubenswrapper[7440]: I0312 14:22:05.104500 7440 generic.go:334] "Generic (PLEG): container finished" podID="61d829d7-38e1-4826-942c-f7317c4a4bec" containerID="952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6" exitCode=0 Mar 12 14:22:05.105205 master-0 kubenswrapper[7440]: I0312 14:22:05.105176 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:22:05.105506 master-0 kubenswrapper[7440]: E0312 14:22:05.105430 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:22:05.132244 master-0 kubenswrapper[7440]: I0312 14:22:05.132109 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:05.132244 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:05.132244 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:05.132244 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:05.132506 master-0 kubenswrapper[7440]: I0312 14:22:05.132238 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:06.131165 master-0 kubenswrapper[7440]: I0312 14:22:06.131091 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:06.131165 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:06.131165 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:06.131165 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:06.132073 master-0 kubenswrapper[7440]: I0312 14:22:06.131193 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:07.132489 master-0 kubenswrapper[7440]: I0312 14:22:07.132417 7440 patch_prober.go:28] 
interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:07.132489 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:07.132489 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:07.132489 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:07.133171 master-0 kubenswrapper[7440]: I0312 14:22:07.132493 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:08.131678 master-0 kubenswrapper[7440]: I0312 14:22:08.131595 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:08.131678 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:08.131678 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:08.131678 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:08.132025 master-0 kubenswrapper[7440]: I0312 14:22:08.131699 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:09.008127 master-0 kubenswrapper[7440]: E0312 14:22:09.007952 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:22:09.131732 master-0 kubenswrapper[7440]: I0312 14:22:09.131654 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:09.131732 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:09.131732 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:09.131732 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:09.132081 master-0 kubenswrapper[7440]: I0312 14:22:09.131758 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:10.133188 master-0 kubenswrapper[7440]: I0312 14:22:10.133063 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:10.133188 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:10.133188 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:10.133188 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:10.133188 master-0 kubenswrapper[7440]: I0312 14:22:10.133173 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:11.131985 master-0 kubenswrapper[7440]: I0312 14:22:11.131937 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:11.131985 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:11.131985 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:11.131985 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:11.132312 master-0 kubenswrapper[7440]: I0312 14:22:11.132006 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:12.132088 master-0 kubenswrapper[7440]: I0312 14:22:12.132023 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:12.132088 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:12.132088 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:12.132088 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:12.132758 master-0 kubenswrapper[7440]: I0312 14:22:12.132109 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:12.586118 master-0 kubenswrapper[7440]: E0312 14:22:12.585959 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:22:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:22:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:22:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:22:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\
"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:22:13.132603 master-0 kubenswrapper[7440]: I0312 14:22:13.132513 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:13.132603 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:13.132603 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:13.132603 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:13.132603 master-0 kubenswrapper[7440]: I0312 14:22:13.132578 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:14.131765 master-0 kubenswrapper[7440]: I0312 14:22:14.131704 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:14.131765 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:14.131765 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:14.131765 master-0 
kubenswrapper[7440]: healthz check failed
Mar 12 14:22:14.131765 master-0 kubenswrapper[7440]: I0312 14:22:14.131770 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:15.131940 master-0 kubenswrapper[7440]: I0312 14:22:15.131867 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:15.131940 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:15.131940 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:15.131940 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:15.132606 master-0 kubenswrapper[7440]: I0312 14:22:15.131961 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:16.133155 master-0 kubenswrapper[7440]: I0312 14:22:16.133086 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:16.133155 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:16.133155 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:16.133155 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:16.134172 master-0 kubenswrapper[7440]: I0312 14:22:16.133179 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:17.132987 master-0 kubenswrapper[7440]: I0312 14:22:17.132833 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:17.132987 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:17.132987 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:17.132987 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:17.133668 master-0 kubenswrapper[7440]: I0312 14:22:17.133009 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:17.968103 master-0 kubenswrapper[7440]: I0312 14:22:17.968011 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:22:17.968509 master-0 kubenswrapper[7440]: I0312 14:22:17.968114 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:22:18.131040 master-0 kubenswrapper[7440]: I0312 14:22:18.130955 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:18.131040 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:18.131040 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:18.131040 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:18.131373 master-0 kubenswrapper[7440]: I0312 14:22:18.131059 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:19.131184 master-0 kubenswrapper[7440]: I0312 14:22:19.131117 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:19.131184 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:19.131184 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:19.131184 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:19.131184 master-0 kubenswrapper[7440]: I0312 14:22:19.131183 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:20.131276 master-0 kubenswrapper[7440]: I0312 14:22:20.131209 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:20.131276 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:20.131276 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:20.131276 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:20.131860 master-0 kubenswrapper[7440]: I0312 14:22:20.131320 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:21.132000 master-0 kubenswrapper[7440]: I0312 14:22:21.131929 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:21.132000 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:21.132000 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:21.132000 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:21.132649 master-0 kubenswrapper[7440]: I0312 14:22:21.132021 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:22.131508 master-0 kubenswrapper[7440]: I0312 14:22:22.131441 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:22.131508 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:22.131508 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:22.131508 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:22.131796 master-0 kubenswrapper[7440]: I0312 14:22:22.131513 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:22.586421 master-0 kubenswrapper[7440]: E0312 14:22:22.586313 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:22:23.131641 master-0 kubenswrapper[7440]: I0312 14:22:23.131573 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:23.131641 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:23.131641 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:23.131641 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:23.131920 master-0 kubenswrapper[7440]: I0312 14:22:23.131653 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:24.132125 master-0 kubenswrapper[7440]: I0312 14:22:24.132011 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:24.132125 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:24.132125 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:24.132125 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:24.132125 master-0 kubenswrapper[7440]: I0312 14:22:24.132087 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:25.132660 master-0 kubenswrapper[7440]: I0312 14:22:25.132541 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:25.132660 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:25.132660 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:25.132660 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:25.133703 master-0 kubenswrapper[7440]: I0312 14:22:25.132665 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:26.009600 master-0 kubenswrapper[7440]: E0312 14:22:26.009506 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 14:22:26.132487 master-0 kubenswrapper[7440]: I0312 14:22:26.132399 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:26.132487 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:26.132487 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:26.132487 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:26.133286 master-0 kubenswrapper[7440]: I0312 14:22:26.132498 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:27.132120 master-0 kubenswrapper[7440]: I0312 14:22:27.132041 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:27.132120 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:27.132120 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:27.132120 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:27.132395 master-0 kubenswrapper[7440]: I0312 14:22:27.132134 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:28.131621 master-0 kubenswrapper[7440]: I0312 14:22:28.131584 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:28.131621 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:28.131621 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:28.131621 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:28.132270 master-0 kubenswrapper[7440]: I0312 14:22:28.132244 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:29.130827 master-0 kubenswrapper[7440]: I0312 14:22:29.130706 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:29.130827 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:29.130827 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:29.130827 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:29.130827 master-0 kubenswrapper[7440]: I0312 14:22:29.130793 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:29.259408 master-0 kubenswrapper[7440]: I0312 14:22:29.259352 7440 generic.go:334] "Generic (PLEG): container finished" podID="de61e1fe-294c-48a6-8cf3-aeb4637ef2cc" containerID="1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04" exitCode=0
Mar 12 14:22:29.261008 master-0 kubenswrapper[7440]: I0312 14:22:29.260976 7440 generic.go:334] "Generic (PLEG): container finished" podID="76d596c0-6a41-43e1-9516-aee9ad834ec2" containerID="3229df69e2e642a1705181c6aea965ce680072f14717e055b2a989c42f067dc0" exitCode=0
Mar 12 14:22:29.262073 master-0 kubenswrapper[7440]: I0312 14:22:29.262050 7440 generic.go:334] "Generic (PLEG): container finished" podID="61de099a-410b-4d30-83e8-19cf5901cb27" containerID="b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be" exitCode=0
Mar 12 14:22:29.263332 master-0 kubenswrapper[7440]: I0312 14:22:29.263305 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/cluster-autoscaler-operator/0.log"
Mar 12 14:22:29.263687 master-0 kubenswrapper[7440]: I0312 14:22:29.263668 7440 generic.go:334] "Generic (PLEG): container finished" podID="9757edbb-8ce2-4513-9b32-a552df50634c" containerID="1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31" exitCode=255
Mar 12 14:22:29.265466 master-0 kubenswrapper[7440]: I0312 14:22:29.265441 7440 generic.go:334] "Generic (PLEG): container finished" podID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" containerID="93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261" exitCode=0
Mar 12 14:22:29.267044 master-0 kubenswrapper[7440]: I0312 14:22:29.267023 7440 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1" exitCode=0
Mar 12 14:22:29.268372 master-0 kubenswrapper[7440]: I0312 14:22:29.268352 7440 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571" exitCode=0
Mar 12 14:22:29.269755 master-0 kubenswrapper[7440]: I0312 14:22:29.269724 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-dvv78_85459175-2c9c-425d-bdfb-0a79c92ed110/package-server-manager/0.log"
Mar 12 14:22:29.270136 master-0 kubenswrapper[7440]: I0312 14:22:29.270104 7440 generic.go:334] "Generic (PLEG): container finished" podID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerID="e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab" exitCode=1
Mar 12 14:22:29.271689 master-0 kubenswrapper[7440]: I0312 14:22:29.271657 7440 generic.go:334] "Generic (PLEG): container finished" podID="08ea0d9f-0635-4759-803e-572eca2f2d34" containerID="d27cef2ffd951ac8b7af825674c33be11e2853a2bd3265c01b885bcdafe8ff3f" exitCode=0
Mar 12 14:22:29.273702 master-0 kubenswrapper[7440]: I0312 14:22:29.273669 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="10e2670e6ab6b47f07948c60e7e3a46c3f0ed3468cba558c9fc231e5dc2ca43a" exitCode=0
Mar 12 14:22:29.275221 master-0 kubenswrapper[7440]: I0312 14:22:29.275192 7440 generic.go:334] "Generic (PLEG): container finished" podID="6b77ad35-2fff-47bb-ad34-abb3868b09a9" containerID="b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36" exitCode=0
Mar 12 14:22:29.276800 master-0 kubenswrapper[7440]: I0312 14:22:29.276774 7440 generic.go:334] "Generic (PLEG): container finished" podID="3f72fbbe-69f0-4622-be05-b839ff9b4d45" containerID="e7dea74eb883602f1f3d133f192958f321d40672d5572126aaddfb68d54ed527" exitCode=0
Mar 12 14:22:29.278149 master-0 kubenswrapper[7440]: I0312 14:22:29.278121 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/machine-api-operator/0.log"
Mar 12 14:22:29.278468 master-0 kubenswrapper[7440]: I0312 14:22:29.278439 7440 generic.go:334] "Generic (PLEG): container finished" podID="6f5cd3ff-ced6-47e3-8054-d83053d87680" containerID="d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae" exitCode=255
Mar 12 14:22:29.280408 master-0 kubenswrapper[7440]: I0312 14:22:29.280376 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/0.log"
Mar 12 14:22:29.280477 master-0 kubenswrapper[7440]: I0312 14:22:29.280420 7440 generic.go:334] "Generic (PLEG): container finished" podID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" containerID="9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307" exitCode=0
Mar 12 14:22:29.282155 master-0 kubenswrapper[7440]: I0312 14:22:29.282128 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" containerID="ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f" exitCode=0
Mar 12 14:22:29.283691 master-0 kubenswrapper[7440]: I0312 14:22:29.283667 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-zghs6_879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/cluster-node-tuning-operator/0.log"
Mar 12 14:22:29.283750 master-0 kubenswrapper[7440]: I0312 14:22:29.283707 7440 generic.go:334] "Generic (PLEG): container finished" podID="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" containerID="84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b" exitCode=1
Mar 12 14:22:29.285239 master-0 kubenswrapper[7440]: I0312 14:22:29.285209 7440 generic.go:334] "Generic (PLEG): container finished" podID="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" containerID="10ebd0ad67dc09a94de6455e90b725a93074cf336ebd90eea3f8574d71ab8322" exitCode=0
Mar 12 14:22:29.286984 master-0 kubenswrapper[7440]: I0312 14:22:29.286959 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bba274a-38c7-4d13-88a5-6bc39228416c" containerID="b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be" exitCode=0
Mar 12 14:22:29.288616 master-0 kubenswrapper[7440]: I0312 14:22:29.288593 7440 generic.go:334] "Generic (PLEG): container finished" podID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerID="0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f" exitCode=0
Mar 12 14:22:29.289865 master-0 kubenswrapper[7440]: I0312 14:22:29.289849 7440 generic.go:334] "Generic (PLEG): container finished" podID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" containerID="fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d" exitCode=0
Mar 12 14:22:29.291157 master-0 kubenswrapper[7440]: I0312 14:22:29.291135 7440 generic.go:334] "Generic (PLEG): container finished" podID="a2435b91-86d6-415b-a978-34cc859e74f2" containerID="875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152" exitCode=0
Mar 12 14:22:29.293142 master-0 kubenswrapper[7440]: I0312 14:22:29.293118 7440 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="d09193ab64fa4ad5898ed40452f50720dec8c982d5f7eb0df7950d928c3d3534" exitCode=0
Mar 12 14:22:29.295128 master-0 kubenswrapper[7440]: I0312 14:22:29.295092 7440 generic.go:334] "Generic (PLEG): container finished" podID="3dc73c14-852d-4957-b6ac-84366ba0594f" containerID="e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c" exitCode=0
Mar 12 14:22:29.788146 master-0 kubenswrapper[7440]: I0312 14:22:29.788100 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:22:29.788287 master-0 kubenswrapper[7440]: I0312 14:22:29.788100 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:22:29.788287 master-0 kubenswrapper[7440]: I0312 14:22:29.788197 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:22:29.788287 master-0 kubenswrapper[7440]: I0312 14:22:29.788162 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:22:29.953756 master-0 kubenswrapper[7440]: I0312 14:22:29.953643 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:29.953756 master-0 kubenswrapper[7440]: I0312 14:22:29.953742 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:30.131430 master-0 kubenswrapper[7440]: I0312 14:22:30.131258 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:30.131430 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:30.131430 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:30.131430 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:30.131430 master-0 kubenswrapper[7440]: I0312 14:22:30.131349 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:30.455357 master-0 kubenswrapper[7440]: I0312 14:22:30.455265 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:30.455357 master-0 kubenswrapper[7440]: I0312 14:22:30.455335 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:31.132938 master-0 kubenswrapper[7440]: I0312 14:22:31.132818 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:31.132938 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:31.132938 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:31.132938 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:31.133311 master-0 kubenswrapper[7440]: I0312 14:22:31.132942 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:32.131608 master-0 kubenswrapper[7440]: I0312 14:22:32.131483 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:32.131608 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:32.131608 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:32.131608 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:32.131608 master-0 kubenswrapper[7440]: I0312 14:22:32.131593 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:32.587345 master-0 kubenswrapper[7440]: E0312 14:22:32.587244 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:22:32.953791 master-0 kubenswrapper[7440]: I0312 14:22:32.953674 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:32.954112 master-0 kubenswrapper[7440]: I0312 14:22:32.954079 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:33.132134 master-0 kubenswrapper[7440]: I0312 14:22:33.132085 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:33.132134 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:33.132134 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:33.132134 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:33.132645 master-0 kubenswrapper[7440]: I0312 14:22:33.132152 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:33.261999 master-0 kubenswrapper[7440]: I0312 14:22:33.261964 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body=
Mar 12 14:22:33.262441 master-0 kubenswrapper[7440]: I0312 14:22:33.262405 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused"
Mar 12 14:22:33.262774 master-0 kubenswrapper[7440]: I0312 14:22:33.262017 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body=
Mar 12 14:22:33.262943 master-0 kubenswrapper[7440]: I0312 14:22:33.262781 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused"
Mar 12 14:22:33.455471 master-0 kubenswrapper[7440]: I0312 14:22:33.455366 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:33.455471 master-0 kubenswrapper[7440]: I0312 14:22:33.455443 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:34.131864 master-0 kubenswrapper[7440]: I0312 14:22:34.131791 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:34.131864 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:34.131864 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:34.131864 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:34.132934 master-0 kubenswrapper[7440]: I0312 14:22:34.131880 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:34.354544 master-0 kubenswrapper[7440]: I0312 14:22:34.354444 7440 scope.go:117] "RemoveContainer" containerID="80ae1c45663433034e72c5c20f8723a435fbf83c810f99ce19145980cd404753"
Mar 12 14:22:34.376656 master-0 kubenswrapper[7440]: I0312 14:22:34.376569 7440 scope.go:117] "RemoveContainer" containerID="241aab17123596d30cb151981c1709611449c7907327ce4b19c53019951ff0d7"
Mar 12 14:22:34.398564 master-0 kubenswrapper[7440]: I0312 14:22:34.398516 7440 scope.go:117] "RemoveContainer" containerID="4bcb9b48cc8fca228497ac0b2a61db8d6fd6ac7df91adf72143bbed36d3bb12a"
Mar 12 14:22:34.424459 master-0 kubenswrapper[7440]: I0312 14:22:34.424393 7440 scope.go:117] "RemoveContainer" containerID="4767c99ca8b14443f1382cd9b5a19a4aba786928a26c41b8fce765c6d6383500"
Mar 12 14:22:34.453465 master-0 kubenswrapper[7440]: I0312 14:22:34.453415 7440 scope.go:117] "RemoveContainer" containerID="7a2823c237ff92e61d73f497473360f5c4e92a6a6cb9f9ef1530c99732f22a88"
Mar 12 14:22:34.488377 master-0 kubenswrapper[7440]: I0312 14:22:34.488186 7440 scope.go:117] "RemoveContainer" containerID="fa8693b6924bc011b2e5ff580645ad5ee2dc963897660400a6b7a2add716cfc2"
Mar 12 14:22:34.519471 master-0 kubenswrapper[7440]: I0312 14:22:34.519424 7440 scope.go:117] "RemoveContainer" containerID="9ba513db643889b41a810dd1c7684949b6c126d71f8ce738dd6a0c0db835816a"
Mar 12 14:22:34.538111 master-0 kubenswrapper[7440]: I0312 14:22:34.538048 7440 scope.go:117] "RemoveContainer" containerID="1841efbfaab3b877f3dc66a0b9aac7bcfbfafdb9f154e9dca3b878d156db51a3"
Mar 12 14:22:34.559096 master-0 kubenswrapper[7440]: I0312 14:22:34.559055 7440 scope.go:117] "RemoveContainer" containerID="8b008968de598692f915807264f6e75fa5d1e6328d1b0539e40f5fbef6013982"
Mar 12 14:22:34.577283 master-0 kubenswrapper[7440]: I0312 14:22:34.577243 7440 scope.go:117] "RemoveContainer" containerID="b4956129e01655acfb40ce60e009de2d9707827560481d924db590d2b05e8343"
Mar 12 14:22:34.593264 master-0 kubenswrapper[7440]: I0312 14:22:34.593200 7440 scope.go:117] "RemoveContainer" containerID="1abac70444f37ebc5d0a9feab691c5f95fb4db1e5c3e7cd1fedbd5970be25447"
Mar 12 14:22:34.608682 master-0 kubenswrapper[7440]: I0312 14:22:34.608634 7440 scope.go:117] "RemoveContainer" containerID="6ba212567515d3f9436de59fb6dea21c7df5a57d0a71d8f4512b348613929a0b"
Mar 12 14:22:34.623583 master-0 kubenswrapper[7440]: I0312 14:22:34.623548 7440 scope.go:117] "RemoveContainer" containerID="73cc9d119c3cd4081058d9ad935f90baed6fe86111a2b8950fb3e1c100feb5fb"
Mar 12 14:22:34.642618 master-0 kubenswrapper[7440]: I0312 14:22:34.642579 7440 scope.go:117] "RemoveContainer" containerID="35b73de7804cd72eded0d5a260eb4f658c50b3bf884978dd585c75921ee17b06"
Mar 12 14:22:34.664786 master-0 kubenswrapper[7440]: I0312 14:22:34.664742 7440 scope.go:117] "RemoveContainer" containerID="d53adb45a67056ee01b81331e65f41973a39210d835cc7c159b8fe9b81f06549"
Mar 12 14:22:34.681795 master-0 kubenswrapper[7440]: I0312 14:22:34.681746 7440 scope.go:117] "RemoveContainer" containerID="7066c3f8af944b7c30200b6b3afe942d0daf91534e053c2a5abd37ae5b0f3435"
Mar 12 14:22:35.133029 master-0 kubenswrapper[7440]: I0312 14:22:35.132876 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:35.133029 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:35.133029 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:35.133029 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:35.134083 master-0 kubenswrapper[7440]: I0312 14:22:35.133021 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:35.360091 master-0 kubenswrapper[7440]: I0312 14:22:35.360058 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/1.log"
Mar 12 14:22:35.369076 master-0 kubenswrapper[7440]: I0312 14:22:35.369031 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log"
Mar 12 14:22:35.369788 master-0 kubenswrapper[7440]: I0312 14:22:35.369744 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log"
Mar 12 14:22:35.954159 master-0 kubenswrapper[7440]: I0312 14:22:35.954022 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:35.954159 master-0 kubenswrapper[7440]: I0312 14:22:35.954130 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:36.132002 master-0 kubenswrapper[7440]: I0312 14:22:36.131892 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:36.132002 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:36.132002 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:36.132002 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:36.132002 master-0 kubenswrapper[7440]: I0312 14:22:36.131984 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:36.454955 master-0 kubenswrapper[7440]: I0312 14:22:36.454781 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:36.455845 master-0 kubenswrapper[7440]: I0312 14:22:36.454886 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:37.084870 master-0 kubenswrapper[7440]: E0312 14:22:37.084663 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete
within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db807adf6a0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:09.517162144 +0000 UTC m=+289.852540703,LastTimestamp:2026-03-12 14:17:09.517162144 +0000 UTC m=+289.852540703,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:22:37.132553 master-0 kubenswrapper[7440]: I0312 14:22:37.132464 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:37.132553 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:37.132553 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:37.132553 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:37.132830 master-0 kubenswrapper[7440]: I0312 14:22:37.132555 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:37.808020 master-0 kubenswrapper[7440]: I0312 14:22:37.807955 7440 patch_prober.go:28] 
interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Mar 12 14:22:37.808493 master-0 kubenswrapper[7440]: I0312 14:22:37.808024 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Mar 12 14:22:37.991225 master-0 kubenswrapper[7440]: E0312 14:22:37.990836 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:22:37.991620 master-0 kubenswrapper[7440]: E0312 14:22:37.991579 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Mar 12 14:22:37.991706 master-0 kubenswrapper[7440]: I0312 14:22:37.991642 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" Mar 12 14:22:37.991706 master-0 kubenswrapper[7440]: I0312 14:22:37.991661 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.991706 master-0 kubenswrapper[7440]: I0312 14:22:37.991703 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
containerID="cri-o://41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991718 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991741 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991766 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991780 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991798 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:22:37.991885 master-0 kubenswrapper[7440]: I0312 14:22:37.991811 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.992344 master-0 kubenswrapper[7440]: I0312 14:22:37.991956 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:22:37.992344 master-0 kubenswrapper[7440]: I0312 14:22:37.991990 
7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:22:37.992981 master-0 kubenswrapper[7440]: I0312 14:22:37.992866 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:22:37.993485 master-0 kubenswrapper[7440]: E0312 14:22:37.993408 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:22:38.005788 master-0 kubenswrapper[7440]: I0312 14:22:38.005682 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:22:38.134009 master-0 kubenswrapper[7440]: I0312 14:22:38.133791 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:38.134009 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:38.134009 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:38.134009 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:38.134009 master-0 kubenswrapper[7440]: I0312 14:22:38.133882 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:38.393454 master-0 kubenswrapper[7440]: 
I0312 14:22:38.393282 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:22:38.393806 master-0 kubenswrapper[7440]: E0312 14:22:38.393750 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:22:39.131597 master-0 kubenswrapper[7440]: I0312 14:22:39.131495 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:39.131597 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:39.131597 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:39.131597 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:39.132340 master-0 kubenswrapper[7440]: I0312 14:22:39.131612 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:39.455177 master-0 kubenswrapper[7440]: I0312 14:22:39.455014 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:22:39.455177 master-0 kubenswrapper[7440]: I0312 
14:22:39.455077 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:22:39.787541 master-0 kubenswrapper[7440]: I0312 14:22:39.787462 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:22:39.787541 master-0 kubenswrapper[7440]: I0312 14:22:39.787496 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:22:39.787814 master-0 kubenswrapper[7440]: I0312 14:22:39.787545 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:22:39.787814 master-0 kubenswrapper[7440]: I0312 14:22:39.787575 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" 
Mar 12 14:22:40.133191 master-0 kubenswrapper[7440]: I0312 14:22:40.133037 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:40.133191 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:40.133191 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:40.133191 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:40.133191 master-0 kubenswrapper[7440]: I0312 14:22:40.133125 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:41.132847 master-0 kubenswrapper[7440]: I0312 14:22:41.132760 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:41.132847 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:41.132847 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:41.132847 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:41.133240 master-0 kubenswrapper[7440]: I0312 14:22:41.132865 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:41.416469 master-0 kubenswrapper[7440]: I0312 14:22:41.416288 7440 generic.go:334] "Generic (PLEG): container finished" podID="dd29b21c-7a0e-4311-952f-427b00468e66" 
containerID="5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30" exitCode=0 Mar 12 14:22:42.131461 master-0 kubenswrapper[7440]: I0312 14:22:42.131371 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:42.131461 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:42.131461 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:42.131461 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:42.131461 master-0 kubenswrapper[7440]: I0312 14:22:42.131432 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:42.455085 master-0 kubenswrapper[7440]: I0312 14:22:42.454929 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:22:42.455085 master-0 kubenswrapper[7440]: I0312 14:22:42.454998 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:22:42.588048 master-0 kubenswrapper[7440]: E0312 14:22:42.587980 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:22:43.011491 master-0 kubenswrapper[7440]: E0312 14:22:43.011412 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:22:43.132752 master-0 kubenswrapper[7440]: I0312 14:22:43.132682 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:43.132752 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:43.132752 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:43.132752 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:43.133073 master-0 kubenswrapper[7440]: I0312 14:22:43.132760 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:43.262537 master-0 kubenswrapper[7440]: I0312 14:22:43.262297 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body= Mar 12 14:22:43.262537 master-0 kubenswrapper[7440]: I0312 14:22:43.262297 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body= Mar 12 14:22:43.262537 master-0 kubenswrapper[7440]: I0312 14:22:43.262381 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" Mar 12 14:22:43.262537 master-0 kubenswrapper[7440]: I0312 14:22:43.262454 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" Mar 12 14:22:44.132008 master-0 kubenswrapper[7440]: I0312 14:22:44.131943 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:44.132008 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:44.132008 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:44.132008 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:44.132008 master-0 kubenswrapper[7440]: I0312 14:22:44.132003 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:44.650504 master-0 kubenswrapper[7440]: I0312 
14:22:44.650434 7440 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-mjxsv container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 12 14:22:44.650504 master-0 kubenswrapper[7440]: I0312 14:22:44.650497 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 12 14:22:45.138773 master-0 kubenswrapper[7440]: I0312 14:22:45.138674 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:45.138773 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:45.138773 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:45.138773 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:45.138773 master-0 kubenswrapper[7440]: I0312 14:22:45.138768 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:45.455139 master-0 kubenswrapper[7440]: I0312 14:22:45.454990 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:22:45.455139 master-0 
kubenswrapper[7440]: I0312 14:22:45.455051 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:22:46.132357 master-0 kubenswrapper[7440]: I0312 14:22:46.132268 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:46.132357 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:46.132357 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:46.132357 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:46.132702 master-0 kubenswrapper[7440]: I0312 14:22:46.132388 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:47.132794 master-0 kubenswrapper[7440]: I0312 14:22:47.132737 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:47.132794 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:47.132794 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:47.132794 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:47.134009 master-0 kubenswrapper[7440]: I0312 14:22:47.133924 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:47.808055 master-0 kubenswrapper[7440]: I0312 14:22:47.807950 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Mar 12 14:22:47.808055 master-0 kubenswrapper[7440]: I0312 14:22:47.808041 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Mar 12 14:22:48.133016 master-0 kubenswrapper[7440]: I0312 14:22:48.132760 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:22:48.133016 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:22:48.133016 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:22:48.133016 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:22:48.133016 master-0 kubenswrapper[7440]: I0312 14:22:48.132873 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:22:48.454722 master-0 kubenswrapper[7440]: I0312 14:22:48.454570 7440 patch_prober.go:28] 
interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:48.454722 master-0 kubenswrapper[7440]: I0312 14:22:48.454672 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:49.132345 master-0 kubenswrapper[7440]: I0312 14:22:49.132270 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:49.132345 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:49.132345 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:49.132345 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:49.132662 master-0 kubenswrapper[7440]: I0312 14:22:49.132351 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:49.788233 master-0 kubenswrapper[7440]: I0312 14:22:49.788172 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:22:49.788878 master-0 kubenswrapper[7440]: I0312 14:22:49.788247 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:22:49.789165 master-0 kubenswrapper[7440]: I0312 14:22:49.789093 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:22:49.789227 master-0 kubenswrapper[7440]: I0312 14:22:49.789194 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:22:50.134307 master-0 kubenswrapper[7440]: I0312 14:22:50.134148 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:50.134307 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:50.134307 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:50.134307 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:50.134307 master-0 kubenswrapper[7440]: I0312 14:22:50.134244 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:51.132227 master-0 kubenswrapper[7440]: I0312 14:22:51.132139 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:51.132227 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:51.132227 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:51.132227 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:51.133244 master-0 kubenswrapper[7440]: I0312 14:22:51.132266 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:51.455163 master-0 kubenswrapper[7440]: I0312 14:22:51.455022 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:51.455507 master-0 kubenswrapper[7440]: I0312 14:22:51.455463 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:52.131845 master-0 kubenswrapper[7440]: I0312 14:22:52.131789 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:52.131845 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:52.131845 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:52.131845 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:52.132282 master-0 kubenswrapper[7440]: I0312 14:22:52.131847 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:52.589249 master-0 kubenswrapper[7440]: E0312 14:22:52.589113 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:22:52.589249 master-0 kubenswrapper[7440]: E0312 14:22:52.589246 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 12 14:22:53.131778 master-0 kubenswrapper[7440]: I0312 14:22:53.131730 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:53.131778 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:53.131778 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:53.131778 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:53.132100 master-0 kubenswrapper[7440]: I0312 14:22:53.131805 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:53.262778 master-0 kubenswrapper[7440]: I0312 14:22:53.262680 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body=
Mar 12 14:22:53.262778 master-0 kubenswrapper[7440]: I0312 14:22:53.262677 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body=
Mar 12 14:22:53.262778 master-0 kubenswrapper[7440]: I0312 14:22:53.262734 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused"
Mar 12 14:22:53.262778 master-0 kubenswrapper[7440]: I0312 14:22:53.262772 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused"
Mar 12 14:22:54.132206 master-0 kubenswrapper[7440]: I0312 14:22:54.132152 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:54.132206 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:54.132206 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:54.132206 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:54.132697 master-0 kubenswrapper[7440]: I0312 14:22:54.132658 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:54.455548 master-0 kubenswrapper[7440]: I0312 14:22:54.455415 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:54.455548 master-0 kubenswrapper[7440]: I0312 14:22:54.455512 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:55.132296 master-0 kubenswrapper[7440]: I0312 14:22:55.132224 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:55.132296 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:55.132296 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:55.132296 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:55.132593 master-0 kubenswrapper[7440]: I0312 14:22:55.132324 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:56.132112 master-0 kubenswrapper[7440]: I0312 14:22:56.132055 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:56.132112 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:56.132112 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:56.132112 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:56.133186 master-0 kubenswrapper[7440]: I0312 14:22:56.133130 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:57.132013 master-0 kubenswrapper[7440]: I0312 14:22:57.131925 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:57.132013 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:57.132013 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:57.132013 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:57.132551 master-0 kubenswrapper[7440]: I0312 14:22:57.132046 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:57.455725 master-0 kubenswrapper[7440]: I0312 14:22:57.455550 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:22:57.455725 master-0 kubenswrapper[7440]: I0312 14:22:57.455618 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:22:57.532465 master-0 kubenswrapper[7440]: I0312 14:22:57.532383 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/2.log"
Mar 12 14:22:57.533373 master-0 kubenswrapper[7440]: I0312 14:22:57.533330 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/1.log"
Mar 12 14:22:57.533779 master-0 kubenswrapper[7440]: I0312 14:22:57.533729 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86" exitCode=1
Mar 12 14:22:57.656566 master-0 kubenswrapper[7440]: I0312 14:22:57.656463 7440 status_manager.go:851] "Failed to get status for pod" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-retry-1-master-0)"
Mar 12 14:22:57.807049 master-0 kubenswrapper[7440]: I0312 14:22:57.806980 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Mar 12 14:22:57.807256 master-0 kubenswrapper[7440]: I0312 14:22:57.807046 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Mar 12 14:22:58.133249 master-0 kubenswrapper[7440]: I0312 14:22:58.133030 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:58.133249 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:58.133249 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:58.133249 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:58.133249 master-0 kubenswrapper[7440]: I0312 14:22:58.133166 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:59.133586 master-0 kubenswrapper[7440]: I0312 14:22:59.133342 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:22:59.133586 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:22:59.133586 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:22:59.133586 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:22:59.133586 master-0 kubenswrapper[7440]: I0312 14:22:59.133453 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:22:59.789795 master-0 kubenswrapper[7440]: I0312 14:22:59.789040 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:22:59.789795 master-0 kubenswrapper[7440]: I0312 14:22:59.789193 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:23:00.012835 master-0 kubenswrapper[7440]: E0312 14:23:00.012720 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 14:23:00.133115 master-0 kubenswrapper[7440]: I0312 14:23:00.132957 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:00.133115 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:00.133115 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:00.133115 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:00.133115 master-0 kubenswrapper[7440]: I0312 14:23:00.133059 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:00.455764 master-0 kubenswrapper[7440]: I0312 14:23:00.455694 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:23:00.455885 master-0 kubenswrapper[7440]: I0312 14:23:00.455779 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:23:01.131548 master-0 kubenswrapper[7440]: I0312 14:23:01.131468 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:01.131548 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:01.131548 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:01.131548 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:01.132646 master-0 kubenswrapper[7440]: I0312 14:23:01.131559 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:02.131415 master-0 kubenswrapper[7440]: I0312 14:23:02.131322 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:02.131415 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:02.131415 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:02.131415 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:02.132040 master-0 kubenswrapper[7440]: I0312 14:23:02.131452 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:03.132763 master-0 kubenswrapper[7440]: I0312 14:23:03.132676 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:03.132763 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:03.132763 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:03.132763 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:03.132763 master-0 kubenswrapper[7440]: I0312 14:23:03.132757 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:03.263320 master-0 kubenswrapper[7440]: I0312 14:23:03.263213 7440 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-dvv78 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body=
Mar 12 14:23:03.264712 master-0 kubenswrapper[7440]: I0312 14:23:03.263325 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" podUID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused"
Mar 12 14:23:03.455517 master-0 kubenswrapper[7440]: I0312 14:23:03.455300 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:23:03.455517 master-0 kubenswrapper[7440]: I0312 14:23:03.455405 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:23:04.132261 master-0 kubenswrapper[7440]: I0312 14:23:04.132181 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:04.132261 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:04.132261 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:04.132261 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:04.132261 master-0 kubenswrapper[7440]: I0312 14:23:04.132268 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:05.131694 master-0 kubenswrapper[7440]: I0312 14:23:05.131584 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:05.131694 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:05.131694 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:05.131694 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:05.131694 master-0 kubenswrapper[7440]: I0312 14:23:05.131679 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:06.132687 master-0 kubenswrapper[7440]: I0312 14:23:06.132592 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:06.132687 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:06.132687 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:06.132687 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:06.132687 master-0 kubenswrapper[7440]: I0312 14:23:06.132687 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:06.455798 master-0 kubenswrapper[7440]: I0312 14:23:06.455615 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:23:06.455798 master-0 kubenswrapper[7440]: I0312 14:23:06.455715 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:23:07.131999 master-0 kubenswrapper[7440]: I0312 14:23:07.131867 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:07.131999 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:07.131999 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:07.131999 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:07.131999 master-0 kubenswrapper[7440]: I0312 14:23:07.131971 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:08.132348 master-0 kubenswrapper[7440]: I0312 14:23:08.132273 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:08.132348 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:08.132348 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:08.132348 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:08.132348 master-0 kubenswrapper[7440]: I0312 14:23:08.132340 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:09.134728 master-0 kubenswrapper[7440]: I0312 14:23:09.134650 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:09.134728 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:09.134728 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:09.134728 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:09.135509 master-0 kubenswrapper[7440]: I0312 14:23:09.134784 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:09.455723 master-0 kubenswrapper[7440]: I0312 14:23:09.455514 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:23:09.455723 master-0 kubenswrapper[7440]: I0312 14:23:09.455609 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:23:09.788202 master-0 kubenswrapper[7440]: I0312 14:23:09.788121 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body=
Mar 12 14:23:09.788202 master-0 kubenswrapper[7440]: I0312 14:23:09.788180 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused"
Mar 12 14:23:10.132511 master-0 kubenswrapper[7440]: I0312 14:23:10.132346 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:10.132511 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:10.132511 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:10.132511 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:10.132511 master-0 kubenswrapper[7440]: I0312 14:23:10.132406 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: E0312 14:23:11.087243 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{router-default-79f8cd6fdd-gjwhp.189c1dafcad31f81 openshift-ingress 11565 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-gjwhp,UID:e7f6ebd3-98c8-457c-a88c-7e81270f01b5,APIVersion:v1,ResourceVersion:11065,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: body: [-]backend-http failed: reason withheld
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]:
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:34 +0000 UTC,LastTimestamp:2026-03-12 14:17:10.13235348 +0000 UTC m=+290.467732079,Count:37,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 12 14:23:11.087416 master-0 kubenswrapper[7440]: >
Mar 12 14:23:11.130869 master-0 kubenswrapper[7440]: I0312 14:23:11.130781 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:23:11.130869 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:23:11.130869 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:23:11.130869 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:23:11.130869 master-0 kubenswrapper[7440]: I0312 14:23:11.130849 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:23:12.009122 master-0 kubenswrapper[7440]: E0312 14:23:12.009024 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:23:12.009480 master-0 kubenswrapper[7440]: E0312 14:23:12.009410 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Mar 12 14:23:12.011121 master-0 kubenswrapper[7440]: I0312 14:23:12.010861 7440 scope.go:117] "RemoveContainer" containerID="875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152"
Mar 12 14:23:12.011121 master-0 kubenswrapper[7440]: I0312 14:23:12.010962 7440 scope.go:117] "RemoveContainer" containerID="ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f"
Mar 12 14:23:12.011720 master-0 kubenswrapper[7440]: I0312 14:23:12.011597 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"
Mar 12 14:23:12.011720 master-0 kubenswrapper[7440]: I0312 14:23:12.011619 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243"
Mar 12 14:23:12.011720 master-0 kubenswrapper[7440]: I0312 14:23:12.011659 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243"
Mar 12 14:23:12.012712 master-0 kubenswrapper[7440]: I0312 14:23:12.012047 7440 scope.go:117] "RemoveContainer" containerID="9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307"
Mar 12 14:23:12.012712 master-0 kubenswrapper[7440]: E0312 14:23:12.012084 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:23:12.012712 master-0 kubenswrapper[7440]: I0312 14:23:12.012572 7440 scope.go:117] "RemoveContainer" containerID="e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab"
Mar 12 14:23:12.013021 master-0 kubenswrapper[7440]: I0312 14:23:12.012839 7440 scope.go:117] "RemoveContainer" containerID="b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be"
Mar 12 14:23:12.013483 master-0 kubenswrapper[7440]: I0312 14:23:12.013441 7440 scope.go:117] "RemoveContainer" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"
Mar 12 14:23:12.014023 master-0 kubenswrapper[7440]: I0312 14:23:12.013455 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd"} pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 14:23:12.014023 master-0 kubenswrapper[7440]: I0312 14:23:12.014004 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" containerID="cri-o://f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd" gracePeriod=600
Mar 12 14:23:12.015412 master-0 kubenswrapper[7440]: I0312 14:23:12.015375 7440 scope.go:117] "RemoveContainer" containerID="93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261"
Mar 12 14:23:12.016019 master-0 kubenswrapper[7440]: I0312 14:23:12.015969 7440 scope.go:117] "RemoveContainer" containerID="b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be"
Mar 12 14:23:12.018407 master-0 kubenswrapper[7440]: I0312 14:23:12.018045 7440 scope.go:117] "RemoveContainer" containerID="45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86"
Mar 12 14:23:12.018407 master-0 kubenswrapper[7440]: I0312 14:23:12.018267 7440 scope.go:117] "RemoveContainer" containerID="0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f"
Mar 12 14:23:12.018638 master-0 kubenswrapper[7440]: E0312 14:23:12.018284 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator
pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:23:12.018953 master-0 kubenswrapper[7440]: I0312 14:23:12.018882 7440 scope.go:117] "RemoveContainer" containerID="926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1" Mar 12 14:23:12.019989 master-0 kubenswrapper[7440]: I0312 14:23:12.019957 7440 scope.go:117] "RemoveContainer" containerID="e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c" Mar 12 14:23:12.020083 master-0 kubenswrapper[7440]: I0312 14:23:12.020050 7440 scope.go:117] "RemoveContainer" containerID="5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30" Mar 12 14:23:12.020286 master-0 kubenswrapper[7440]: I0312 14:23:12.020242 7440 scope.go:117] "RemoveContainer" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:23:12.021099 master-0 kubenswrapper[7440]: I0312 14:23:12.021071 7440 scope.go:117] "RemoveContainer" containerID="1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04" Mar 12 14:23:12.021949 master-0 kubenswrapper[7440]: I0312 14:23:12.021924 7440 scope.go:117] "RemoveContainer" containerID="84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b" Mar 12 14:23:12.022421 master-0 kubenswrapper[7440]: I0312 14:23:12.022372 7440 scope.go:117] "RemoveContainer" containerID="fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d" Mar 12 14:23:12.022686 master-0 kubenswrapper[7440]: I0312 14:23:12.022587 7440 scope.go:117] "RemoveContainer" containerID="1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31" Mar 12 14:23:12.022845 master-0 kubenswrapper[7440]: I0312 14:23:12.022814 7440 scope.go:117] "RemoveContainer" containerID="b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36" Mar 12 14:23:12.024383 master-0 kubenswrapper[7440]: I0312 
14:23:12.024344 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:23:12.133186 master-0 kubenswrapper[7440]: I0312 14:23:12.133117 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:12.133186 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:12.133186 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:12.133186 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:12.133965 master-0 kubenswrapper[7440]: I0312 14:23:12.133203 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:12.457163 master-0 kubenswrapper[7440]: I0312 14:23:12.457038 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:12.457163 master-0 kubenswrapper[7440]: I0312 14:23:12.457100 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:12.640546 master-0 kubenswrapper[7440]: I0312 14:23:12.639278 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-zghs6_879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/cluster-node-tuning-operator/0.log" Mar 12 14:23:12.649195 master-0 kubenswrapper[7440]: I0312 14:23:12.649126 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log" Mar 12 14:23:12.649578 master-0 kubenswrapper[7440]: I0312 14:23:12.649516 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:23:12.654700 master-0 kubenswrapper[7440]: I0312 14:23:12.654669 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/1.log" Mar 12 14:23:12.655621 master-0 kubenswrapper[7440]: I0312 14:23:12.655366 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:23:12.657601 master-0 kubenswrapper[7440]: I0312 14:23:12.657382 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-dvv78_85459175-2c9c-425d-bdfb-0a79c92ed110/package-server-manager/0.log" Mar 12 14:23:12.679167 master-0 kubenswrapper[7440]: I0312 14:23:12.679119 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd" exitCode=0 Mar 12 14:23:12.688298 master-0 kubenswrapper[7440]: I0312 14:23:12.687561 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/cluster-autoscaler-operator/0.log" Mar 12 14:23:12.875171 master-0 kubenswrapper[7440]: E0312 14:23:12.875054 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:23:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:23:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:23:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:23:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\"
:1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb
25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"siz
eBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:23:13.132275 master-0 kubenswrapper[7440]: I0312 14:23:13.132140 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:13.132275 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:13.132275 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:13.132275 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:13.132275 master-0 kubenswrapper[7440]: I0312 14:23:13.132202 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:14.132401 master-0 kubenswrapper[7440]: I0312 14:23:14.132328 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:14.132401 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:14.132401 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:14.132401 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:14.132401 master-0 kubenswrapper[7440]: I0312 14:23:14.132398 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:14.714819 master-0 kubenswrapper[7440]: I0312 14:23:14.714765 7440 generic.go:334] "Generic (PLEG): container finished" podID="a35674af-162c-4a4a-8605-158b2326267e" containerID="74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce" exitCode=0 Mar 12 14:23:15.132940 master-0 kubenswrapper[7440]: I0312 14:23:15.132832 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:15.132940 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:15.132940 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:15.132940 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:15.133622 master-0 kubenswrapper[7440]: I0312 14:23:15.132993 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:15.455030 master-0 kubenswrapper[7440]: I0312 14:23:15.454717 7440 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:15.455030 master-0 kubenswrapper[7440]: I0312 14:23:15.454871 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:16.131780 master-0 kubenswrapper[7440]: I0312 14:23:16.131698 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:16.131780 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:16.131780 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:16.131780 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:16.132130 master-0 kubenswrapper[7440]: I0312 14:23:16.131783 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:17.014532 master-0 kubenswrapper[7440]: E0312 14:23:17.013867 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:23:17.131987 master-0 
kubenswrapper[7440]: I0312 14:23:17.131942 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:17.131987 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:17.131987 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:17.131987 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:17.132348 master-0 kubenswrapper[7440]: I0312 14:23:17.132315 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:18.131369 master-0 kubenswrapper[7440]: I0312 14:23:18.131270 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:18.131369 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:18.131369 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:18.131369 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:18.131369 master-0 kubenswrapper[7440]: I0312 14:23:18.131342 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:18.454601 master-0 kubenswrapper[7440]: I0312 14:23:18.454420 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe 
status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:18.454601 master-0 kubenswrapper[7440]: I0312 14:23:18.454493 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:19.132253 master-0 kubenswrapper[7440]: I0312 14:23:19.132101 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:19.132253 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:19.132253 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:19.132253 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:19.132253 master-0 kubenswrapper[7440]: I0312 14:23:19.132221 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:19.788309 master-0 kubenswrapper[7440]: I0312 14:23:19.788200 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:23:19.788609 master-0 kubenswrapper[7440]: I0312 14:23:19.788341 7440 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:23:20.131867 master-0 kubenswrapper[7440]: I0312 14:23:20.131680 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:20.131867 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:20.131867 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:20.131867 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:20.131867 master-0 kubenswrapper[7440]: I0312 14:23:20.131751 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:21.132258 master-0 kubenswrapper[7440]: I0312 14:23:21.132169 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:21.132258 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:21.132258 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:21.132258 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:21.133032 master-0 kubenswrapper[7440]: I0312 14:23:21.132265 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:21.455076 master-0 kubenswrapper[7440]: I0312 14:23:21.454935 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:21.455076 master-0 kubenswrapper[7440]: I0312 14:23:21.455033 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:22.131349 master-0 kubenswrapper[7440]: I0312 14:23:22.131312 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:22.131349 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:22.131349 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:22.131349 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:22.131685 master-0 kubenswrapper[7440]: I0312 14:23:22.131661 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:22.877416 master-0 kubenswrapper[7440]: E0312 14:23:22.877305 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:23:23.132091 master-0 kubenswrapper[7440]: I0312 14:23:23.131890 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:23.132091 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:23.132091 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:23.132091 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:23.132091 master-0 kubenswrapper[7440]: I0312 14:23:23.131996 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:24.131512 master-0 kubenswrapper[7440]: I0312 14:23:24.131434 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:24.131512 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:24.131512 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:24.131512 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:24.132264 master-0 kubenswrapper[7440]: I0312 14:23:24.131521 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:24.454868 master-0 kubenswrapper[7440]: I0312 
14:23:24.454759 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:24.454868 master-0 kubenswrapper[7440]: I0312 14:23:24.454825 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:25.131438 master-0 kubenswrapper[7440]: I0312 14:23:25.131349 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:25.131438 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:25.131438 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:25.131438 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:25.132125 master-0 kubenswrapper[7440]: I0312 14:23:25.131466 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:26.131354 master-0 kubenswrapper[7440]: I0312 14:23:26.131295 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:26.131354 master-0 
kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:26.131354 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:26.131354 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:26.131756 master-0 kubenswrapper[7440]: I0312 14:23:26.131364 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:27.131631 master-0 kubenswrapper[7440]: I0312 14:23:27.131576 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:27.131631 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:27.131631 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:27.131631 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:27.132426 master-0 kubenswrapper[7440]: I0312 14:23:27.131645 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:27.455779 master-0 kubenswrapper[7440]: I0312 14:23:27.455556 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:27.455779 master-0 kubenswrapper[7440]: I0312 14:23:27.455638 7440 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:28.131221 master-0 kubenswrapper[7440]: I0312 14:23:28.131159 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:28.131221 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:28.131221 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:28.131221 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:28.131653 master-0 kubenswrapper[7440]: I0312 14:23:28.131621 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:29.131761 master-0 kubenswrapper[7440]: I0312 14:23:29.131681 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:29.131761 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:29.131761 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:29.131761 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:29.132313 master-0 kubenswrapper[7440]: I0312 14:23:29.131778 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:29.788372 master-0 kubenswrapper[7440]: I0312 14:23:29.788286 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:23:29.789148 master-0 kubenswrapper[7440]: I0312 14:23:29.789039 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:23:30.132004 master-0 kubenswrapper[7440]: I0312 14:23:30.131816 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:30.132004 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:30.132004 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:30.132004 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:30.132004 master-0 kubenswrapper[7440]: I0312 14:23:30.131937 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:30.455036 master-0 kubenswrapper[7440]: I0312 14:23:30.454796 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator 
namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:30.455036 master-0 kubenswrapper[7440]: I0312 14:23:30.454885 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:31.131550 master-0 kubenswrapper[7440]: I0312 14:23:31.131460 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:31.131550 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:31.131550 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:31.131550 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:31.132037 master-0 kubenswrapper[7440]: I0312 14:23:31.131559 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:32.131061 master-0 kubenswrapper[7440]: I0312 14:23:32.131010 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:32.131061 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:32.131061 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:23:32.131061 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:32.131061 master-0 kubenswrapper[7440]: I0312 14:23:32.131060 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:32.878519 master-0 kubenswrapper[7440]: E0312 14:23:32.878343 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:23:33.132504 master-0 kubenswrapper[7440]: I0312 14:23:33.132308 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:33.132504 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:33.132504 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:33.132504 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:33.132504 master-0 kubenswrapper[7440]: I0312 14:23:33.132458 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:33.455202 master-0 kubenswrapper[7440]: I0312 14:23:33.455070 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" 
start-of-body= Mar 12 14:23:33.455202 master-0 kubenswrapper[7440]: I0312 14:23:33.455122 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:34.017859 master-0 kubenswrapper[7440]: E0312 14:23:34.017224 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:23:34.132143 master-0 kubenswrapper[7440]: I0312 14:23:34.132062 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:34.132143 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:34.132143 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:34.132143 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:34.132523 master-0 kubenswrapper[7440]: I0312 14:23:34.132159 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:34.729991 master-0 kubenswrapper[7440]: I0312 14:23:34.729884 7440 scope.go:117] "RemoveContainer" containerID="8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a" Mar 12 14:23:34.749232 master-0 kubenswrapper[7440]: I0312 14:23:34.749188 7440 scope.go:117] 
"RemoveContainer" containerID="91a8f5c51245c9c31ad9e34f814e801c26cbe6ecd3a5aedc09c0fc9965981075" Mar 12 14:23:35.132343 master-0 kubenswrapper[7440]: I0312 14:23:35.132292 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:35.132343 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:35.132343 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:35.132343 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:35.133001 master-0 kubenswrapper[7440]: I0312 14:23:35.132366 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:36.132542 master-0 kubenswrapper[7440]: I0312 14:23:36.132466 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:36.132542 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:36.132542 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:36.132542 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:36.133617 master-0 kubenswrapper[7440]: I0312 14:23:36.132558 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:36.455311 master-0 kubenswrapper[7440]: I0312 14:23:36.455135 7440 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:36.455311 master-0 kubenswrapper[7440]: I0312 14:23:36.455241 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:37.131705 master-0 kubenswrapper[7440]: I0312 14:23:37.131630 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:37.131705 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:37.131705 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:37.131705 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:37.131705 master-0 kubenswrapper[7440]: I0312 14:23:37.131679 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:38.133509 master-0 kubenswrapper[7440]: I0312 14:23:38.133407 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:38.133509 master-0 kubenswrapper[7440]: [-]has-synced failed: reason 
withheld Mar 12 14:23:38.133509 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:38.133509 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:38.133509 master-0 kubenswrapper[7440]: I0312 14:23:38.133489 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:39.132151 master-0 kubenswrapper[7440]: I0312 14:23:39.132084 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:39.132151 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:39.132151 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:39.132151 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:39.132486 master-0 kubenswrapper[7440]: I0312 14:23:39.132166 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:39.455245 master-0 kubenswrapper[7440]: I0312 14:23:39.455057 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:39.455245 master-0 kubenswrapper[7440]: I0312 14:23:39.455164 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" 
podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:39.787508 master-0 kubenswrapper[7440]: I0312 14:23:39.787461 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:23:39.787813 master-0 kubenswrapper[7440]: I0312 14:23:39.787783 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:23:40.132110 master-0 kubenswrapper[7440]: I0312 14:23:40.131987 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:40.132110 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:40.132110 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:40.132110 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:40.132110 master-0 kubenswrapper[7440]: I0312 14:23:40.132069 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:41.131118 master-0 kubenswrapper[7440]: I0312 14:23:41.131047 
7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:41.131118 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:41.131118 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:41.131118 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:41.131643 master-0 kubenswrapper[7440]: I0312 14:23:41.131126 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:42.132077 master-0 kubenswrapper[7440]: I0312 14:23:42.132007 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:42.132077 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:42.132077 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:42.132077 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:42.132841 master-0 kubenswrapper[7440]: I0312 14:23:42.132083 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:42.455531 master-0 kubenswrapper[7440]: I0312 14:23:42.455371 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:42.455531 master-0 kubenswrapper[7440]: I0312 14:23:42.455440 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:42.879637 master-0 kubenswrapper[7440]: E0312 14:23:42.879553 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:23:42.944288 master-0 kubenswrapper[7440]: I0312 14:23:42.944228 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/3.log" Mar 12 14:23:42.944775 master-0 kubenswrapper[7440]: I0312 14:23:42.944740 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log" Mar 12 14:23:42.945233 master-0 kubenswrapper[7440]: I0312 14:23:42.945202 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:23:42.945298 master-0 kubenswrapper[7440]: I0312 14:23:42.945247 7440 generic.go:334] "Generic (PLEG): container finished" podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" exitCode=1 Mar 12 
14:23:43.131525 master-0 kubenswrapper[7440]: I0312 14:23:43.131411 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:43.131525 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:43.131525 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:43.131525 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:43.131525 master-0 kubenswrapper[7440]: I0312 14:23:43.131495 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:44.131524 master-0 kubenswrapper[7440]: I0312 14:23:44.131445 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:44.131524 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:44.131524 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:44.131524 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:44.132132 master-0 kubenswrapper[7440]: I0312 14:23:44.131560 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:45.091303 master-0 kubenswrapper[7440]: E0312 14:23:45.091147 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline 
exceeded" event=< Mar 12 14:23:45.091303 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{machine-config-daemon-ngzc8.189c1dac0726a2df openshift-machine-config-operator 11631 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-ngzc8,UID:8e4d9407-ff79-4396-a37f-896617e024d4,APIVersion:v1,ResourceVersion:8731,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Mar 12 14:23:45.091303 master-0 kubenswrapper[7440]: body: Mar 12 14:23:45.091303 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:17 +0000 UTC,LastTimestamp:2026-03-12 14:17:17.968127501 +0000 UTC m=+298.303506060,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 14:23:45.091303 master-0 kubenswrapper[7440]: > Mar 12 14:23:45.131222 master-0 kubenswrapper[7440]: I0312 14:23:45.131161 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:45.131222 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:45.131222 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:45.131222 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:45.131562 master-0 kubenswrapper[7440]: I0312 14:23:45.131226 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 
14:23:45.455438 master-0 kubenswrapper[7440]: I0312 14:23:45.455256 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:45.455438 master-0 kubenswrapper[7440]: I0312 14:23:45.455360 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:46.015078 master-0 kubenswrapper[7440]: E0312 14:23:46.015012 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 14:23:46.029108 master-0 kubenswrapper[7440]: E0312 14:23:46.029023 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:23:46.029284 master-0 kubenswrapper[7440]: E0312 14:23:46.029267 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Mar 12 14:23:46.036479 master-0 kubenswrapper[7440]: I0312 14:23:46.036430 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:23:46.131231 master-0 kubenswrapper[7440]: I0312 14:23:46.131172 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:46.131231 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:46.131231 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:46.131231 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:46.131231 master-0 kubenswrapper[7440]: I0312 14:23:46.131235 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:46.971615 master-0 kubenswrapper[7440]: I0312 14:23:46.971578 7440 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="a54d7c040e4e83aac6a6fc975cc3d2fd03101d4237db0646f2870734d1932e37" exitCode=0 Mar 12 14:23:47.131183 master-0 kubenswrapper[7440]: I0312 14:23:47.131145 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:47.131183 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:47.131183 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:47.131183 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:47.131545 master-0 kubenswrapper[7440]: I0312 14:23:47.131517 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:48.131640 master-0 kubenswrapper[7440]: I0312 14:23:48.131562 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:48.131640 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:48.131640 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:48.131640 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:48.132472 master-0 kubenswrapper[7440]: I0312 14:23:48.131641 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:48.456023 master-0 kubenswrapper[7440]: I0312 14:23:48.455004 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:48.456023 master-0 kubenswrapper[7440]: I0312 14:23:48.455123 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:49.132173 master-0 kubenswrapper[7440]: I0312 14:23:49.132092 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:49.132173 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:49.132173 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:23:49.132173 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:49.132173 master-0 kubenswrapper[7440]: I0312 14:23:49.132155 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:49.787620 master-0 kubenswrapper[7440]: I0312 14:23:49.787562 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:23:49.787811 master-0 kubenswrapper[7440]: I0312 14:23:49.787628 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:23:50.131831 master-0 kubenswrapper[7440]: I0312 14:23:50.131687 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:50.131831 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:50.131831 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:50.131831 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:50.131831 master-0 kubenswrapper[7440]: I0312 14:23:50.131774 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:51.019476 master-0 kubenswrapper[7440]: E0312 14:23:51.019300 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:23:51.131654 master-0 kubenswrapper[7440]: I0312 14:23:51.131599 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:51.131654 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:51.131654 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:51.131654 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:51.131934 master-0 kubenswrapper[7440]: I0312 14:23:51.131672 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:51.455711 master-0 kubenswrapper[7440]: I0312 14:23:51.455522 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:51.455711 master-0 kubenswrapper[7440]: I0312 14:23:51.455596 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" 
podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:52.131517 master-0 kubenswrapper[7440]: I0312 14:23:52.131449 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:52.131517 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:52.131517 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:52.131517 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:52.132066 master-0 kubenswrapper[7440]: I0312 14:23:52.131520 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:52.880213 master-0 kubenswrapper[7440]: E0312 14:23:52.880153 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:23:52.880213 master-0 kubenswrapper[7440]: E0312 14:23:52.880202 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:23:53.131843 master-0 kubenswrapper[7440]: I0312 14:23:53.131749 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:53.131843 master-0 
kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:53.131843 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:53.131843 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:53.132468 master-0 kubenswrapper[7440]: I0312 14:23:53.132435 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:54.131303 master-0 kubenswrapper[7440]: I0312 14:23:54.131241 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:54.131303 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:54.131303 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:54.131303 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:54.131680 master-0 kubenswrapper[7440]: I0312 14:23:54.131320 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:54.455281 master-0 kubenswrapper[7440]: I0312 14:23:54.455117 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:54.456329 master-0 kubenswrapper[7440]: I0312 14:23:54.456263 7440 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:55.131413 master-0 kubenswrapper[7440]: I0312 14:23:55.131344 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:55.131413 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:55.131413 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:55.131413 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:55.131690 master-0 kubenswrapper[7440]: I0312 14:23:55.131425 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:56.131784 master-0 kubenswrapper[7440]: I0312 14:23:56.131656 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:56.131784 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:56.131784 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:56.131784 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:56.132481 master-0 kubenswrapper[7440]: I0312 14:23:56.131849 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:57.131922 master-0 kubenswrapper[7440]: I0312 14:23:57.131849 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:57.131922 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:57.131922 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:57.131922 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:57.132716 master-0 kubenswrapper[7440]: I0312 14:23:57.131947 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:57.455199 master-0 kubenswrapper[7440]: I0312 14:23:57.455049 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:23:57.455388 master-0 kubenswrapper[7440]: I0312 14:23:57.455152 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:23:57.657919 master-0 kubenswrapper[7440]: I0312 14:23:57.657817 7440 status_manager.go:851] "Failed to get status for pod" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" 
pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 12 14:23:58.133162 master-0 kubenswrapper[7440]: I0312 14:23:58.133082 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:58.133162 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:58.133162 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:58.133162 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:58.134000 master-0 kubenswrapper[7440]: I0312 14:23:58.133169 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:59.132290 master-0 kubenswrapper[7440]: I0312 14:23:59.132238 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:23:59.132290 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:23:59.132290 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:23:59.132290 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:23:59.132580 master-0 kubenswrapper[7440]: I0312 14:23:59.132321 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:23:59.788608 master-0 
kubenswrapper[7440]: I0312 14:23:59.788534 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:23:59.789411 master-0 kubenswrapper[7440]: I0312 14:23:59.788756 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:00.131750 master-0 kubenswrapper[7440]: I0312 14:24:00.131636 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:24:00.131750 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:24:00.131750 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:24:00.131750 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:24:00.131750 master-0 kubenswrapper[7440]: I0312 14:24:00.131697 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:24:00.455475 master-0 kubenswrapper[7440]: I0312 14:24:00.455333 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 
10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:00.455475 master-0 kubenswrapper[7440]: I0312 14:24:00.455410 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:01.131438 master-0 kubenswrapper[7440]: I0312 14:24:01.131376 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:24:01.131438 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:24:01.131438 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:24:01.131438 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:24:01.132124 master-0 kubenswrapper[7440]: I0312 14:24:01.131442 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:24:02.132309 master-0 kubenswrapper[7440]: I0312 14:24:02.132251 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:24:02.132309 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:24:02.132309 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:24:02.132309 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:24:02.132814 master-0 kubenswrapper[7440]: I0312 
14:24:02.132315 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:24:03.132295 master-0 kubenswrapper[7440]: I0312 14:24:03.132241 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:24:03.132295 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:24:03.132295 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:24:03.132295 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:24:03.133067 master-0 kubenswrapper[7440]: I0312 14:24:03.132300 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:24:03.454602 master-0 kubenswrapper[7440]: I0312 14:24:03.454432 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:03.454602 master-0 kubenswrapper[7440]: I0312 14:24:03.454502 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:06.455714 master-0 
kubenswrapper[7440]: I0312 14:24:06.455602 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:06.455714 master-0 kubenswrapper[7440]: I0312 14:24:06.455690 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:08.020576 master-0 kubenswrapper[7440]: E0312 14:24:08.020501 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:24:09.455092 master-0 kubenswrapper[7440]: I0312 14:24:09.455024 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:09.455092 master-0 kubenswrapper[7440]: I0312 14:24:09.455080 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:09.788322 master-0 
kubenswrapper[7440]: I0312 14:24:09.788227 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:24:09.788322 master-0 kubenswrapper[7440]: I0312 14:24:09.788315 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:12.455822 master-0 kubenswrapper[7440]: I0312 14:24:12.455704 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:12.455822 master-0 kubenswrapper[7440]: I0312 14:24:12.455796 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:13.141407 master-0 kubenswrapper[7440]: I0312 14:24:13.141335 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/2.log" Mar 12 14:24:13.141819 master-0 kubenswrapper[7440]: I0312 14:24:13.141755 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/1.log" Mar 12 14:24:13.142381 master-0 kubenswrapper[7440]: I0312 14:24:13.142313 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:24:13.142381 master-0 kubenswrapper[7440]: I0312 14:24:13.142342 7440 generic.go:334] "Generic (PLEG): container finished" podID="3edaa533-ecbb-443e-a270-4cb4f923daf6" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" exitCode=1 Mar 12 14:24:13.169453 master-0 kubenswrapper[7440]: E0312 14:24:13.169329 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:24:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":16374
45817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e
00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739c
ba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:15.455260 master-0 kubenswrapper[7440]: I0312 14:24:15.455208 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:15.455731 master-0 kubenswrapper[7440]: I0312 14:24:15.455263 7440 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:18.455703 master-0 kubenswrapper[7440]: I0312 14:24:18.455610 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:18.455703 master-0 kubenswrapper[7440]: I0312 14:24:18.455693 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:19.094660 master-0 kubenswrapper[7440]: E0312 14:24:19.094491 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-daemon-ngzc8.189c1dac0728854e openshift-machine-config-operator 11632 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-ngzc8,UID:8e4d9407-ff79-4396-a37f-896617e024d4,APIVersion:v1,ResourceVersion:8731,FieldPath:spec.containers{machine-config-daemon},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:17 +0000 UTC,LastTimestamp:2026-03-12 
14:17:17.968251234 +0000 UTC m=+298.303629793,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:24:19.788315 master-0 kubenswrapper[7440]: I0312 14:24:19.788232 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:24:19.788945 master-0 kubenswrapper[7440]: I0312 14:24:19.788311 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:20.039290 master-0 kubenswrapper[7440]: E0312 14:24:20.039125 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:24:20.039535 master-0 kubenswrapper[7440]: E0312 14:24:20.039429 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Mar 12 14:24:20.039535 master-0 kubenswrapper[7440]: I0312 14:24:20.039474 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:24:20.039535 master-0 kubenswrapper[7440]: I0312 14:24:20.039500 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" 
event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a"} Mar 12 14:24:20.039535 master-0 kubenswrapper[7440]: I0312 14:24:20.039533 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f8be4405a8d4e6b47e3984fee4354cff707b030f91ac3d80bc5aee09db3ea4a" Mar 12 14:24:20.039992 master-0 kubenswrapper[7440]: I0312 14:24:20.039554 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:24:20.039992 master-0 kubenswrapper[7440]: I0312 14:24:20.039572 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:24:20.039992 master-0 kubenswrapper[7440]: I0312 14:24:20.039741 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:24:20.039992 master-0 kubenswrapper[7440]: I0312 14:24:20.039933 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:24:20.040297 master-0 kubenswrapper[7440]: I0312 14:24:20.040093 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" containerID="cri-o://942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705" Mar 12 14:24:20.040297 master-0 kubenswrapper[7440]: I0312 14:24:20.040115 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:24:20.040297 master-0 kubenswrapper[7440]: I0312 14:24:20.040133 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" 
event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd"} Mar 12 14:24:20.041079 master-0 kubenswrapper[7440]: I0312 14:24:20.041016 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted" Mar 12 14:24:20.041192 master-0 kubenswrapper[7440]: I0312 14:24:20.041112 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196" gracePeriod=3600 Mar 12 14:24:20.041273 master-0 kubenswrapper[7440]: I0312 14:24:20.041199 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:24:20.054752 master-0 kubenswrapper[7440]: I0312 14:24:20.054667 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:24:21.236657 master-0 kubenswrapper[7440]: I0312 14:24:21.236597 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/4.log" Mar 12 14:24:21.455409 master-0 kubenswrapper[7440]: I0312 14:24:21.455326 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:21.455409 
master-0 kubenswrapper[7440]: I0312 14:24:21.455409 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:23.170750 master-0 kubenswrapper[7440]: E0312 14:24:23.170674 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:24.455918 master-0 kubenswrapper[7440]: I0312 14:24:24.455726 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:24.456633 master-0 kubenswrapper[7440]: I0312 14:24:24.455974 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:25.021869 master-0 kubenswrapper[7440]: E0312 14:24:25.021604 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:24:27.454813 master-0 kubenswrapper[7440]: I0312 14:24:27.454731 
7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:27.454813 master-0 kubenswrapper[7440]: I0312 14:24:27.454804 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:29.518292 master-0 kubenswrapper[7440]: I0312 14:24:29.518184 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:24:29.518292 master-0 kubenswrapper[7440]: I0312 14:24:29.518285 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:29.787464 master-0 kubenswrapper[7440]: I0312 14:24:29.787310 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" 
start-of-body= Mar 12 14:24:29.787464 master-0 kubenswrapper[7440]: I0312 14:24:29.787385 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:30.455473 master-0 kubenswrapper[7440]: I0312 14:24:30.455361 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:30.455473 master-0 kubenswrapper[7440]: I0312 14:24:30.455456 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:33.171145 master-0 kubenswrapper[7440]: E0312 14:24:33.171090 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:33.454721 master-0 kubenswrapper[7440]: I0312 14:24:33.454498 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 
14:24:33.454721 master-0 kubenswrapper[7440]: I0312 14:24:33.454605 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:36.455810 master-0 kubenswrapper[7440]: I0312 14:24:36.455714 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:36.456452 master-0 kubenswrapper[7440]: I0312 14:24:36.455814 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:39.454714 master-0 kubenswrapper[7440]: I0312 14:24:39.454609 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:39.455611 master-0 kubenswrapper[7440]: I0312 14:24:39.454735 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 
10.128.0.15:8443: connect: connection refused" Mar 12 14:24:39.517259 master-0 kubenswrapper[7440]: I0312 14:24:39.517158 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:24:39.517477 master-0 kubenswrapper[7440]: I0312 14:24:39.517257 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:39.787270 master-0 kubenswrapper[7440]: I0312 14:24:39.787198 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:24:39.787540 master-0 kubenswrapper[7440]: I0312 14:24:39.787275 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:42.023515 master-0 kubenswrapper[7440]: E0312 14:24:42.023371 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:24:42.456064 master-0 kubenswrapper[7440]: I0312 14:24:42.455764 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:42.456064 master-0 kubenswrapper[7440]: I0312 14:24:42.455955 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:43.172442 master-0 kubenswrapper[7440]: E0312 14:24:43.172360 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 12 14:24:43.396017 master-0 kubenswrapper[7440]: I0312 14:24:43.395862 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/2.log" Mar 12 14:24:43.396569 master-0 kubenswrapper[7440]: I0312 14:24:43.396517 7440 generic.go:334] "Generic (PLEG): container finished" podID="3dc73c14-852d-4957-b6ac-84366ba0594f" containerID="69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3" exitCode=255 Mar 12 14:24:43.398968 master-0 kubenswrapper[7440]: I0312 
14:24:43.398927 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/2.log" Mar 12 14:24:43.399435 master-0 kubenswrapper[7440]: I0312 14:24:43.399396 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bba274a-38c7-4d13-88a5-6bc39228416c" containerID="1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38" exitCode=255 Mar 12 14:24:43.401750 master-0 kubenswrapper[7440]: I0312 14:24:43.401709 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/2.log" Mar 12 14:24:43.402233 master-0 kubenswrapper[7440]: I0312 14:24:43.402175 7440 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1" exitCode=255 Mar 12 14:24:43.404444 master-0 kubenswrapper[7440]: I0312 14:24:43.404366 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/1.log" Mar 12 14:24:43.405120 master-0 kubenswrapper[7440]: I0312 14:24:43.405068 7440 generic.go:334] "Generic (PLEG): container finished" podID="61de099a-410b-4d30-83e8-19cf5901cb27" containerID="ff3016afcdb6778aaf743a4289ede546ee1d9d24d09eb7a34743d13e7defa760" exitCode=255 Mar 12 14:24:43.407286 master-0 kubenswrapper[7440]: I0312 14:24:43.407254 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/2.log" Mar 12 14:24:43.407873 master-0 kubenswrapper[7440]: I0312 14:24:43.407821 7440 
generic.go:334] "Generic (PLEG): container finished" podID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" containerID="95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b" exitCode=255 Mar 12 14:24:43.410710 master-0 kubenswrapper[7440]: I0312 14:24:43.410676 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/2.log" Mar 12 14:24:43.411389 master-0 kubenswrapper[7440]: I0312 14:24:43.411311 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" containerID="44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa" exitCode=255 Mar 12 14:24:43.413640 master-0 kubenswrapper[7440]: I0312 14:24:43.413540 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/2.log" Mar 12 14:24:43.414268 master-0 kubenswrapper[7440]: I0312 14:24:43.414202 7440 generic.go:334] "Generic (PLEG): container finished" podID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerID="add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e" exitCode=255 Mar 12 14:24:43.416632 master-0 kubenswrapper[7440]: I0312 14:24:43.416576 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/2.log" Mar 12 14:24:43.417409 master-0 kubenswrapper[7440]: I0312 14:24:43.417351 7440 generic.go:334] "Generic (PLEG): container finished" podID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" containerID="90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2" exitCode=255 Mar 12 14:24:43.419530 master-0 kubenswrapper[7440]: I0312 14:24:43.419479 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/1.log" Mar 12 14:24:43.419949 master-0 kubenswrapper[7440]: I0312 14:24:43.419882 7440 generic.go:334] "Generic (PLEG): container finished" podID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" containerID="ab1742f72c830599c24487d25e2f7d4998ed83fdb4a1bdbebd1e3d87b6efbbf6" exitCode=255 Mar 12 14:24:45.455782 master-0 kubenswrapper[7440]: I0312 14:24:45.455704 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:45.456435 master-0 kubenswrapper[7440]: I0312 14:24:45.455787 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:48.455044 master-0 kubenswrapper[7440]: I0312 14:24:48.454965 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:48.455044 master-0 kubenswrapper[7440]: I0312 14:24:48.455039 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:49.518424 master-0 kubenswrapper[7440]: I0312 14:24:49.518331 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:24:49.519158 master-0 kubenswrapper[7440]: I0312 14:24:49.518449 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:49.788277 master-0 kubenswrapper[7440]: I0312 14:24:49.788060 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:24:49.788277 master-0 kubenswrapper[7440]: I0312 14:24:49.788174 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:24:51.455145 master-0 kubenswrapper[7440]: I0312 14:24:51.455000 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:24:51.455145 master-0 kubenswrapper[7440]: I0312 14:24:51.455088 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:24:51.473102 master-0 kubenswrapper[7440]: I0312 14:24:51.473013 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:24:51.473688 master-0 kubenswrapper[7440]: I0312 14:24:51.473626 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/4.log" Mar 12 14:24:51.475097 master-0 kubenswrapper[7440]: I0312 14:24:51.475017 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" exitCode=255 Mar 12 14:24:53.098014 master-0 kubenswrapper[7440]: E0312 14:24:53.097790 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 12 14:24:53.098014 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{kube-controller-manager-master-0.189c1db807ad5717 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 14:24:53.098014 master-0 kubenswrapper[7440]: body: Mar 12 14:24:53.098014 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:09.517121303 +0000 UTC m=+289.852499862,LastTimestamp:2026-03-12 14:17:19.517176081 +0000 UTC m=+299.852554640,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 14:24:53.098014 master-0 kubenswrapper[7440]: > Mar 12 14:24:53.174052 master-0 kubenswrapper[7440]: E0312 14:24:53.173682 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:24:53.174052 master-0 kubenswrapper[7440]: E0312 14:24:53.173756 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:24:54.057772 master-0 kubenswrapper[7440]: E0312 14:24:54.057691 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:24:54.058696 master-0 kubenswrapper[7440]: E0312 14:24:54.058642 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s" Mar 12 14:24:54.061758 
master-0 kubenswrapper[7440]: I0312 14:24:54.061701 7440 scope.go:117] "RemoveContainer" containerID="f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571" Mar 12 14:24:54.061957 master-0 kubenswrapper[7440]: I0312 14:24:54.061875 7440 scope.go:117] "RemoveContainer" containerID="add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e" Mar 12 14:24:54.062855 master-0 kubenswrapper[7440]: E0312 14:24:54.062322 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-5884b9cd56-mjxsv_openshift-etcd-operator(8d775283-2696-4411-8ddf-d4e6000f0a0c)\"" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" Mar 12 14:24:54.062855 master-0 kubenswrapper[7440]: I0312 14:24:54.062368 7440 scope.go:117] "RemoveContainer" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:24:54.062855 master-0 kubenswrapper[7440]: I0312 14:24:54.062739 7440 scope.go:117] "RemoveContainer" containerID="d27cef2ffd951ac8b7af825674c33be11e2853a2bd3265c01b885bcdafe8ff3f" Mar 12 14:24:54.064385 master-0 kubenswrapper[7440]: I0312 14:24:54.064025 7440 scope.go:117] "RemoveContainer" containerID="1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38" Mar 12 14:24:54.064488 master-0 kubenswrapper[7440]: E0312 14:24:54.064404 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-qtql5_openshift-kube-controller-manager-operator(1bba274a-38c7-4d13-88a5-6bc39228416c)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" podUID="1bba274a-38c7-4d13-88a5-6bc39228416c" Mar 
12 14:24:54.065640 master-0 kubenswrapper[7440]: I0312 14:24:54.065591 7440 scope.go:117] "RemoveContainer" containerID="10e2670e6ab6b47f07948c60e7e3a46c3f0ed3468cba558c9fc231e5dc2ca43a" Mar 12 14:24:54.066021 master-0 kubenswrapper[7440]: I0312 14:24:54.065990 7440 scope.go:117] "RemoveContainer" containerID="45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86" Mar 12 14:24:54.066151 master-0 kubenswrapper[7440]: I0312 14:24:54.066114 7440 scope.go:117] "RemoveContainer" containerID="95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b" Mar 12 14:24:54.066882 master-0 kubenswrapper[7440]: E0312 14:24:54.066431 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-zwdgk_openshift-controller-manager-operator(d00a8cc7-7774-40bd-94a1-9ac2d0f63234)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" Mar 12 14:24:54.066882 master-0 kubenswrapper[7440]: I0312 14:24:54.066689 7440 scope.go:117] "RemoveContainer" containerID="e7dea74eb883602f1f3d133f192958f321d40672d5572126aaddfb68d54ed527" Mar 12 14:24:54.067489 master-0 kubenswrapper[7440]: I0312 14:24:54.067066 7440 scope.go:117] "RemoveContainer" containerID="90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2" Mar 12 14:24:54.067489 master-0 kubenswrapper[7440]: E0312 14:24:54.067344 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=network-operator pod=network-operator-7c649bf6d4-ldxfn_openshift-network-operator(7433d9bf-4edf-4787-a7a1-e5102c7264c7)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" 
podUID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" Mar 12 14:24:54.067615 master-0 kubenswrapper[7440]: I0312 14:24:54.067510 7440 scope.go:117] "RemoveContainer" containerID="91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1" Mar 12 14:24:54.068600 master-0 kubenswrapper[7440]: E0312 14:24:54.068541 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:24:54.069211 master-0 kubenswrapper[7440]: I0312 14:24:54.069111 7440 scope.go:117] "RemoveContainer" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 14:24:54.069601 master-0 kubenswrapper[7440]: I0312 14:24:54.069564 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:24:54.069688 master-0 kubenswrapper[7440]: I0312 14:24:54.069624 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d"} Mar 12 14:24:54.069688 master-0 kubenswrapper[7440]: I0312 14:24:54.069655 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerDied","Data":"6aa44e483ff3af56ade2c830f5190301f0a2aff21489693f95cab78436b2ad8d"} Mar 12 14:24:54.069775 master-0 kubenswrapper[7440]: I0312 14:24:54.069687 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:24:54.070744 master-0 kubenswrapper[7440]: I0312 14:24:54.070694 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:24:54.071261 master-0 kubenswrapper[7440]: E0312 14:24:54.071198 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:24:54.072358 master-0 kubenswrapper[7440]: I0312 14:24:54.072320 7440 scope.go:117] "RemoveContainer" containerID="74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce" Mar 12 14:24:54.073150 master-0 kubenswrapper[7440]: I0312 14:24:54.073108 7440 scope.go:117] "RemoveContainer" containerID="3229df69e2e642a1705181c6aea965ce680072f14717e055b2a989c42f067dc0" Mar 12 14:24:54.073706 master-0 kubenswrapper[7440]: I0312 14:24:54.073661 7440 scope.go:117] "RemoveContainer" containerID="69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3" Mar 12 14:24:54.074122 master-0 kubenswrapper[7440]: E0312 14:24:54.074053 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-7f65c457f5-hkf2t_openshift-kube-storage-version-migrator-operator(3dc73c14-852d-4957-b6ac-84366ba0594f)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" podUID="3dc73c14-852d-4957-b6ac-84366ba0594f" Mar 
12 14:24:54.074563 master-0 kubenswrapper[7440]: I0312 14:24:54.074523 7440 scope.go:117] "RemoveContainer" containerID="d09193ab64fa4ad5898ed40452f50720dec8c982d5f7eb0df7950d928c3d3534" Mar 12 14:24:54.075654 master-0 kubenswrapper[7440]: I0312 14:24:54.075611 7440 scope.go:117] "RemoveContainer" containerID="ab1742f72c830599c24487d25e2f7d4998ed83fdb4a1bdbebd1e3d87b6efbbf6" Mar 12 14:24:54.076834 master-0 kubenswrapper[7440]: I0312 14:24:54.076794 7440 scope.go:117] "RemoveContainer" containerID="44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa" Mar 12 14:24:54.077087 master-0 kubenswrapper[7440]: E0312 14:24:54.077051 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-smpl5_openshift-kube-apiserver-operator(a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" podUID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" Mar 12 14:24:54.077753 master-0 kubenswrapper[7440]: I0312 14:24:54.077711 7440 scope.go:117] "RemoveContainer" containerID="ff3016afcdb6778aaf743a4289ede546ee1d9d24d09eb7a34743d13e7defa760" Mar 12 14:24:54.078368 master-0 kubenswrapper[7440]: I0312 14:24:54.078334 7440 scope.go:117] "RemoveContainer" containerID="952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6" Mar 12 14:24:54.078769 master-0 kubenswrapper[7440]: I0312 14:24:54.078738 7440 scope.go:117] "RemoveContainer" containerID="10ebd0ad67dc09a94de6455e90b725a93074cf336ebd90eea3f8574d71ab8322" Mar 12 14:24:54.079562 master-0 kubenswrapper[7440]: I0312 14:24:54.079517 7440 scope.go:117] "RemoveContainer" containerID="d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae" Mar 12 14:24:54.093212 master-0 kubenswrapper[7440]: I0312 14:24:54.093160 7440 mirror_client.go:130] "Deleting a mirror pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:24:55.512196 master-0 kubenswrapper[7440]: I0312 14:24:55.512121 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/3.log" Mar 12 14:24:55.513082 master-0 kubenswrapper[7440]: I0312 14:24:55.513027 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log" Mar 12 14:24:55.513607 master-0 kubenswrapper[7440]: I0312 14:24:55.513575 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:24:55.526164 master-0 kubenswrapper[7440]: I0312 14:24:55.526141 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/2.log" Mar 12 14:24:55.526669 master-0 kubenswrapper[7440]: I0312 14:24:55.526644 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/1.log" Mar 12 14:24:55.527669 master-0 kubenswrapper[7440]: I0312 14:24:55.527647 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log" Mar 12 14:24:55.530078 master-0 kubenswrapper[7440]: I0312 14:24:55.530027 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/1.log" Mar 12 14:24:55.535611 master-0 kubenswrapper[7440]: I0312 14:24:55.535596 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/2.log" Mar 12 14:24:55.536351 master-0 kubenswrapper[7440]: I0312 14:24:55.536333 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/1.log" Mar 12 14:24:55.541639 master-0 kubenswrapper[7440]: I0312 14:24:55.541613 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/machine-api-operator/0.log" Mar 12 14:24:55.551543 master-0 kubenswrapper[7440]: I0312 14:24:55.551524 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/1.log" Mar 12 14:24:57.662602 master-0 kubenswrapper[7440]: I0312 14:24:57.662505 7440 status_manager.go:851] "Failed to get status for pod" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-operator-5884b9cd56-mjxsv)" Mar 12 14:24:59.025052 master-0 kubenswrapper[7440]: E0312 14:24:59.024955 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 
14:25:06.637630 master-0 kubenswrapper[7440]: I0312 14:25:06.637540 7440 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196" exitCode=0 Mar 12 14:25:07.132643 master-0 kubenswrapper[7440]: I0312 14:25:07.132565 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:07.132643 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:07.132643 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:07.132643 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:07.132998 master-0 kubenswrapper[7440]: I0312 14:25:07.132670 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:08.132165 master-0 kubenswrapper[7440]: I0312 14:25:08.132084 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:08.132165 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:08.132165 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:08.132165 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:08.132882 master-0 kubenswrapper[7440]: I0312 14:25:08.132176 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 12 14:25:09.134557 master-0 kubenswrapper[7440]: I0312 14:25:09.134470 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:09.134557 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:09.134557 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:09.134557 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:09.135284 master-0 kubenswrapper[7440]: I0312 14:25:09.134576 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:10.131849 master-0 kubenswrapper[7440]: I0312 14:25:10.131751 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:10.131849 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:10.131849 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:10.131849 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:10.132256 master-0 kubenswrapper[7440]: I0312 14:25:10.132030 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:11.132541 master-0 kubenswrapper[7440]: I0312 14:25:11.132453 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:11.132541 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:11.132541 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:11.132541 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:11.132541 master-0 kubenswrapper[7440]: I0312 14:25:11.132530 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:12.132278 master-0 kubenswrapper[7440]: I0312 14:25:12.132134 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:12.132278 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:12.132278 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:12.132278 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:12.133108 master-0 kubenswrapper[7440]: I0312 14:25:12.132324 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:13.132882 master-0 kubenswrapper[7440]: I0312 14:25:13.132800 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:13.132882 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:25:13.132882 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:13.132882 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:13.133553 master-0 kubenswrapper[7440]: I0312 14:25:13.132881 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:13.249500 master-0 kubenswrapper[7440]: E0312 14:25:13.248731 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:25:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:25:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:25:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:25:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certi
fied-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d9
01ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],
\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:25:13.691329 master-0 kubenswrapper[7440]: I0312 14:25:13.691214 7440 generic.go:334] "Generic (PLEG): container finished" podID="dd29b21c-7a0e-4311-952f-427b00468e66" containerID="b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc" exitCode=0 Mar 12 14:25:14.131681 master-0 kubenswrapper[7440]: I0312 14:25:14.131564 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:14.131681 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:14.131681 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:14.131681 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:14.132246 master-0 kubenswrapper[7440]: I0312 14:25:14.131710 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:15.131834 master-0 kubenswrapper[7440]: I0312 14:25:15.131708 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:15.131834 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:15.131834 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:15.131834 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:15.132474 master-0 kubenswrapper[7440]: I0312 14:25:15.131883 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:16.025805 master-0 kubenswrapper[7440]: E0312 14:25:16.025706 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:25:16.131928 master-0 kubenswrapper[7440]: I0312 14:25:16.131843 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:16.131928 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:16.131928 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:16.131928 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:16.132686 master-0 kubenswrapper[7440]: I0312 14:25:16.131948 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:17.132888 master-0 kubenswrapper[7440]: I0312 14:25:17.132802 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:17.132888 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:17.132888 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:17.132888 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:17.133773 master-0 kubenswrapper[7440]: I0312 14:25:17.132890 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:17.968526 master-0 kubenswrapper[7440]: I0312 14:25:17.968481 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:25:17.969000 master-0 kubenswrapper[7440]: I0312 14:25:17.968959 7440 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:25:18.132869 master-0 kubenswrapper[7440]: I0312 14:25:18.132786 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:18.132869 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:18.132869 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:18.132869 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:18.134148 master-0 kubenswrapper[7440]: I0312 14:25:18.132871 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:19.132716 master-0 kubenswrapper[7440]: I0312 14:25:19.132602 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:19.132716 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:19.132716 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:19.132716 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:19.133053 master-0 kubenswrapper[7440]: I0312 14:25:19.132717 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:20.131829 master-0 kubenswrapper[7440]: I0312 14:25:20.131775 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:20.131829 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:20.131829 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:20.131829 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:20.132806 master-0 kubenswrapper[7440]: I0312 14:25:20.131837 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:21.131826 master-0 kubenswrapper[7440]: I0312 14:25:21.131731 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:21.131826 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:21.131826 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:21.131826 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:21.131826 master-0 kubenswrapper[7440]: I0312 14:25:21.131801 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:22.131395 master-0 kubenswrapper[7440]: I0312 14:25:22.131297 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:22.131395 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:22.131395 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:22.131395 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:22.131395 master-0 kubenswrapper[7440]: I0312 14:25:22.131360 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:23.131888 master-0 kubenswrapper[7440]: I0312 14:25:23.131802 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:23.131888 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:23.131888 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:23.131888 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:23.131888 master-0 kubenswrapper[7440]: I0312 14:25:23.131857 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:23.250263 master-0 kubenswrapper[7440]: E0312 14:25:23.250106 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 12 14:25:24.131833 master-0 kubenswrapper[7440]: I0312 14:25:24.131771 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:24.131833 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:24.131833 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:24.131833 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:24.132418 master-0 kubenswrapper[7440]: I0312 14:25:24.131837 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:24.771714 master-0 kubenswrapper[7440]: I0312 14:25:24.771671 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/4.log" Mar 12 14:25:24.772111 master-0 kubenswrapper[7440]: I0312 14:25:24.772095 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/3.log" Mar 12 14:25:24.772511 master-0 kubenswrapper[7440]: I0312 14:25:24.772489 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/2.log" Mar 12 14:25:24.772983 master-0 kubenswrapper[7440]: I0312 14:25:24.772962 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/1.log" Mar 12 14:25:24.773054 master-0 kubenswrapper[7440]: I0312 14:25:24.773002 7440 generic.go:334] "Generic (PLEG): container finished" podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" exitCode=1 Mar 12 14:25:25.132243 master-0 kubenswrapper[7440]: I0312 14:25:25.132157 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:25.132243 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:25.132243 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:25.132243 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:25.132243 master-0 kubenswrapper[7440]: I0312 14:25:25.132222 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:25.782877 master-0 kubenswrapper[7440]: I0312 14:25:25.782822 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/1.log" Mar 12 14:25:25.783767 master-0 kubenswrapper[7440]: I0312 14:25:25.783638 7440 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="b0ef8cb458573461dc78ec84dd70e59e9585b138f2517187a17259dabba2dfeb" exitCode=255 Mar 12 14:25:26.132504 master-0 kubenswrapper[7440]: I0312 14:25:26.132307 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:26.132504 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:26.132504 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:26.132504 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:26.132504 master-0 kubenswrapper[7440]: I0312 14:25:26.132372 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:27.102177 master-0 kubenswrapper[7440]: E0312 14:25:27.102008 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1db807adf6a0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:09.517162144 +0000 UTC m=+289.852540703,LastTimestamp:2026-03-12 14:17:19.517216831 +0000 UTC m=+299.852595390,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:25:27.131784 master-0 
kubenswrapper[7440]: I0312 14:25:27.131685 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:27.131784 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:27.131784 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:27.131784 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:27.131784 master-0 kubenswrapper[7440]: I0312 14:25:27.131761 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:28.097065 master-0 kubenswrapper[7440]: E0312 14:25:28.096946 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 14:25:28.098359 master-0 kubenswrapper[7440]: E0312 14:25:28.097346 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.028s" Mar 12 14:25:28.098359 master-0 kubenswrapper[7440]: I0312 14:25:28.098021 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:25:28.098359 master-0 kubenswrapper[7440]: I0312 14:25:28.098066 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.098676 7440 scope.go:117] "RemoveContainer" containerID="1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: 
I0312 14:25:28.098776 7440 scope.go:117] "RemoveContainer" containerID="44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.098831 7440 scope.go:117] "RemoveContainer" containerID="95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.099142 7440 scope.go:117] "RemoveContainer" containerID="69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.099250 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: E0312 14:25:28.099574 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.100153 7440 scope.go:117] "RemoveContainer" containerID="90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.100290 7440 scope.go:117] "RemoveContainer" containerID="add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e" Mar 12 14:25:28.103191 master-0 kubenswrapper[7440]: I0312 14:25:28.100484 7440 scope.go:117] "RemoveContainer" containerID="91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1" Mar 12 14:25:28.114969 master-0 kubenswrapper[7440]: I0312 14:25:28.114926 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" 
Mar 12 14:25:28.131681 master-0 kubenswrapper[7440]: I0312 14:25:28.131598 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:28.131681 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:28.131681 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:28.131681 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:28.131895 master-0 kubenswrapper[7440]: I0312 14:25:28.131733 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:28.809690 master-0 kubenswrapper[7440]: I0312 14:25:28.809609 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/2.log" Mar 12 14:25:28.813959 master-0 kubenswrapper[7440]: I0312 14:25:28.813843 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/2.log" Mar 12 14:25:28.817672 master-0 kubenswrapper[7440]: I0312 14:25:28.817509 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/2.log" Mar 12 14:25:28.821131 master-0 kubenswrapper[7440]: I0312 14:25:28.821091 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/2.log" Mar 12 14:25:28.824776 master-0 kubenswrapper[7440]: I0312 14:25:28.824712 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/2.log" Mar 12 14:25:28.828427 master-0 kubenswrapper[7440]: I0312 14:25:28.828386 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/2.log" Mar 12 14:25:28.832111 master-0 kubenswrapper[7440]: I0312 14:25:28.832062 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/2.log" Mar 12 14:25:29.133188 master-0 kubenswrapper[7440]: I0312 14:25:29.132977 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:29.133188 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:29.133188 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:29.133188 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:29.133188 master-0 kubenswrapper[7440]: I0312 14:25:29.133149 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:30.132121 master-0 kubenswrapper[7440]: I0312 
14:25:30.132031 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:30.132121 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:30.132121 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:30.132121 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:30.816025 master-0 kubenswrapper[7440]: I0312 14:25:30.132143 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:31.131946 master-0 kubenswrapper[7440]: I0312 14:25:31.131740 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:31.131946 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:31.131946 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:31.131946 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:31.131946 master-0 kubenswrapper[7440]: I0312 14:25:31.131819 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:32.131595 master-0 kubenswrapper[7440]: I0312 14:25:32.131538 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:32.131595 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:32.131595 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:32.131595 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:32.132423 master-0 kubenswrapper[7440]: I0312 14:25:32.132300 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:33.026810 master-0 kubenswrapper[7440]: E0312 14:25:33.026719 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:25:33.131931 master-0 kubenswrapper[7440]: I0312 14:25:33.131878 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:33.131931 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:33.131931 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:33.131931 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:33.132456 master-0 kubenswrapper[7440]: I0312 14:25:33.132433 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:33.251077 master-0 kubenswrapper[7440]: E0312 14:25:33.250982 7440 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:25:34.132036 master-0 kubenswrapper[7440]: I0312 14:25:34.131925 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:34.132036 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:34.132036 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:34.132036 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:34.133388 master-0 kubenswrapper[7440]: I0312 14:25:34.132029 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:25:35.131577 master-0 kubenswrapper[7440]: I0312 14:25:35.131494 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:25:35.131577 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:25:35.131577 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:25:35.131577 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:25:35.131859 master-0 kubenswrapper[7440]: I0312 14:25:35.131589 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500"
Mar 12 14:25:36.131846 master-0 kubenswrapper[7440]: I0312 14:25:36.131777 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:36.131846 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:36.131846 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:36.131846 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:36.132456 master-0 kubenswrapper[7440]: I0312 14:25:36.131865 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:37.132509 master-0 kubenswrapper[7440]: I0312 14:25:37.132402 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:37.132509 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:37.132509 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:37.132509 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:37.133509 master-0 kubenswrapper[7440]: I0312 14:25:37.132518 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:38.131774 master-0 kubenswrapper[7440]: I0312 14:25:38.131692 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:38.131774 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:38.131774 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:38.131774 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:38.131774 master-0 kubenswrapper[7440]: I0312 14:25:38.131764 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:39.132552 master-0 kubenswrapper[7440]: I0312 14:25:39.132469 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:39.132552 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:39.132552 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:39.132552 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:39.133164 master-0 kubenswrapper[7440]: I0312 14:25:39.132563 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:40.131833 master-0 kubenswrapper[7440]: I0312 14:25:40.131771 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:40.131833 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:40.131833 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:40.131833 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:40.132467 master-0 kubenswrapper[7440]: I0312 14:25:40.132427 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:41.131577 master-0 kubenswrapper[7440]: I0312 14:25:41.131507 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:41.131577 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:41.131577 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:41.131577 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:41.132295 master-0 kubenswrapper[7440]: I0312 14:25:41.131578 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:42.132327 master-0 kubenswrapper[7440]: I0312 14:25:42.132240 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:42.132327 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:42.132327 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:42.132327 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:42.133335 master-0 kubenswrapper[7440]: I0312 14:25:42.132337 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:43.132365 master-0 kubenswrapper[7440]: I0312 14:25:43.132289 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:43.132365 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:43.132365 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:43.132365 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:43.133346 master-0 kubenswrapper[7440]: I0312 14:25:43.132374 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:43.251957 master-0 kubenswrapper[7440]: E0312 14:25:43.251849 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:25:44.132554 master-0 kubenswrapper[7440]: I0312 14:25:44.132424 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:44.132554 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:44.132554 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:44.132554 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:44.132554 master-0 kubenswrapper[7440]: I0312 14:25:44.132522 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:45.131759 master-0 kubenswrapper[7440]: I0312 14:25:45.131688 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:45.131759 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:45.131759 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:45.131759 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:45.132262 master-0 kubenswrapper[7440]: I0312 14:25:45.131762 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:46.131971 master-0 kubenswrapper[7440]: I0312 14:25:46.131892 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:46.131971 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:46.131971 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:46.131971 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:46.132761 master-0 kubenswrapper[7440]: I0312 14:25:46.132004 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:47.132338 master-0 kubenswrapper[7440]: I0312 14:25:47.132251 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:47.132338 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:47.132338 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:47.132338 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:47.133077 master-0 kubenswrapper[7440]: I0312 14:25:47.132347 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:47.968129 master-0 kubenswrapper[7440]: I0312 14:25:47.968049 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:25:47.968439 master-0 kubenswrapper[7440]: I0312 14:25:47.968141 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:25:48.132023 master-0 kubenswrapper[7440]: I0312 14:25:48.131915 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:48.132023 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:48.132023 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:48.132023 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:48.132023 master-0 kubenswrapper[7440]: I0312 14:25:48.131996 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:49.132418 master-0 kubenswrapper[7440]: I0312 14:25:49.132354 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:49.132418 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:49.132418 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:49.132418 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:49.132958 master-0 kubenswrapper[7440]: I0312 14:25:49.132447 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:50.029144 master-0 kubenswrapper[7440]: E0312 14:25:50.029026 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 14:25:50.133249 master-0 kubenswrapper[7440]: I0312 14:25:50.133180 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:50.133249 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:50.133249 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:50.133249 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:50.133920 master-0 kubenswrapper[7440]: I0312 14:25:50.133874 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:51.131397 master-0 kubenswrapper[7440]: I0312 14:25:51.131333 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:51.131397 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:51.131397 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:51.131397 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:51.131734 master-0 kubenswrapper[7440]: I0312 14:25:51.131400 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:52.132032 master-0 kubenswrapper[7440]: I0312 14:25:52.131925 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:52.132032 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:52.132032 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:52.132032 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:52.132821 master-0 kubenswrapper[7440]: I0312 14:25:52.132041 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:53.132249 master-0 kubenswrapper[7440]: I0312 14:25:53.132164 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:53.132249 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:53.132249 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:53.132249 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:53.132961 master-0 kubenswrapper[7440]: I0312 14:25:53.132260 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:53.253032 master-0 kubenswrapper[7440]: E0312 14:25:53.252482 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:25:53.253032 master-0 kubenswrapper[7440]: E0312 14:25:53.252556 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 12 14:25:54.132013 master-0 kubenswrapper[7440]: I0312 14:25:54.131825 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:54.132013 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:54.132013 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:54.132013 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:54.133364 master-0 kubenswrapper[7440]: I0312 14:25:54.133223 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:55.032458 master-0 kubenswrapper[7440]: I0312 14:25:55.032423 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/3.log"
Mar 12 14:25:55.032909 master-0 kubenswrapper[7440]: I0312 14:25:55.032860 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/2.log"
Mar 12 14:25:55.033347 master-0 kubenswrapper[7440]: I0312 14:25:55.033316 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/1.log"
Mar 12 14:25:55.034102 master-0 kubenswrapper[7440]: I0312 14:25:55.034082 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/0.log"
Mar 12 14:25:55.034152 master-0 kubenswrapper[7440]: I0312 14:25:55.034117 7440 generic.go:334] "Generic (PLEG): container finished" podID="3edaa533-ecbb-443e-a270-4cb4f923daf6" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a" exitCode=1
Mar 12 14:25:55.133794 master-0 kubenswrapper[7440]: I0312 14:25:55.133659 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:55.133794 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:55.133794 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:55.133794 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:55.134479 master-0 kubenswrapper[7440]: I0312 14:25:55.134450 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:56.131343 master-0 kubenswrapper[7440]: I0312 14:25:56.131270 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:56.131343 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:56.131343 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:56.131343 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:56.131343 master-0 kubenswrapper[7440]: I0312 14:25:56.131338 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:57.132096 master-0 kubenswrapper[7440]: I0312 14:25:57.131983 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:57.132096 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:57.132096 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:57.132096 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:57.133170 master-0 kubenswrapper[7440]: I0312 14:25:57.132122 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:57.663648 master-0 kubenswrapper[7440]: I0312 14:25:57.663580 7440 status_manager.go:851] "Failed to get status for pod" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-controller-manager-operator-8565d84698-zwdgk)"
Mar 12 14:25:58.132368 master-0 kubenswrapper[7440]: I0312 14:25:58.132254 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:58.132368 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:58.132368 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:58.132368 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:58.132368 master-0 kubenswrapper[7440]: I0312 14:25:58.132363 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:25:59.132309 master-0 kubenswrapper[7440]: I0312 14:25:59.132187 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:25:59.132309 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:25:59.132309 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:25:59.132309 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:25:59.132309 master-0 kubenswrapper[7440]: I0312 14:25:59.132270 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:00.132781 master-0 kubenswrapper[7440]: I0312 14:26:00.132645 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:00.132781 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:00.132781 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:00.132781 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:00.133668 master-0 kubenswrapper[7440]: I0312 14:26:00.132818 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:01.106316 master-0 kubenswrapper[7440]: E0312 14:26:01.106105 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Mar 12 14:26:01.106316 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{kube-controller-manager-master-0.189c1dbc30193fa8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:52232->127.0.0.1:10357: read: connection reset by peer
Mar 12 14:26:01.106316 master-0 kubenswrapper[7440]: body:
Mar 12 14:26:01.106316 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:27.375151016 +0000 UTC m=+307.710529565,LastTimestamp:2026-03-12 14:17:27.375151016 +0000 UTC m=+307.710529565,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 12 14:26:01.106316 master-0 kubenswrapper[7440]: >
Mar 12 14:26:01.131817 master-0 kubenswrapper[7440]: I0312 14:26:01.131722 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:01.131817 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:01.131817 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:01.131817 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:01.131817 master-0 kubenswrapper[7440]: I0312 14:26:01.131808 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:02.101330 master-0 kubenswrapper[7440]: E0312 14:26:02.101252 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 12 14:26:02.118138 master-0 kubenswrapper[7440]: E0312 14:26:02.118064 7440 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 14:26:02.118349 master-0 kubenswrapper[7440]: E0312 14:26:02.118300 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s"
Mar 12 14:26:02.118349 master-0 kubenswrapper[7440]: I0312 14:26:02.118329 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"
Mar 12 14:26:02.118349 master-0 kubenswrapper[7440]: I0312 14:26:02.118340 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:26:02.118443 master-0 kubenswrapper[7440]: I0312 14:26:02.118367 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerID="cri-o://a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"
Mar 12 14:26:02.118443 master-0 kubenswrapper[7440]: I0312 14:26:02.118376 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:26:02.118443 master-0 kubenswrapper[7440]: I0312 14:26:02.118388 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerID="cri-o://8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196"
Mar 12 14:26:02.118443 master-0 kubenswrapper[7440]: I0312 14:26:02.118398 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:26:02.118443 master-0 kubenswrapper[7440]: I0312 14:26:02.118409 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"
Mar 12 14:26:02.119421 master-0 kubenswrapper[7440]: I0312 14:26:02.119373 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f"
Mar 12 14:26:02.119798 master-0 kubenswrapper[7440]: E0312 14:26:02.119717 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:26:02.131792 master-0 kubenswrapper[7440]: I0312 14:26:02.131700 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 12 14:26:02.132038 master-0 kubenswrapper[7440]: I0312 14:26:02.131922 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:02.132038 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:02.132038 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:02.132038 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:02.132038 master-0 kubenswrapper[7440]: I0312 14:26:02.131974 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:03.130984 master-0 kubenswrapper[7440]: I0312 14:26:03.130944 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:03.130984 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:03.130984 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:03.130984 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:03.131564 master-0 kubenswrapper[7440]: I0312 14:26:03.131000 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:04.132681 master-0 kubenswrapper[7440]: I0312 14:26:04.132542 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:04.132681 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:04.132681 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:04.132681 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:04.132681 master-0 kubenswrapper[7440]: I0312 14:26:04.132626 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:05.131550 master-0 kubenswrapper[7440]: I0312 14:26:05.131474 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:05.131550 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:05.131550 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:05.131550 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:05.131888 master-0 kubenswrapper[7440]: I0312 14:26:05.131814 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:06.132683 master-0 kubenswrapper[7440]: I0312 14:26:06.132595 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:06.132683 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:06.132683 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:06.132683 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:06.133297 master-0 kubenswrapper[7440]: I0312 14:26:06.132694 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:07.030259 master-0 kubenswrapper[7440]: E0312 14:26:07.029768 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 14:26:07.131875 master-0 kubenswrapper[7440]: I0312 14:26:07.131790 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:07.131875 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:07.131875 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:07.131875 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:07.132202 master-0 kubenswrapper[7440]: I0312 14:26:07.131874 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:08.131134 master-0 kubenswrapper[7440]: I0312 14:26:08.131061 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:08.131134 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:08.131134 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:08.131134 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:08.131702 master-0 kubenswrapper[7440]: I0312 14:26:08.131164 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:09.132218 master-0 kubenswrapper[7440]: I0312 14:26:09.132113 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:09.132218 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:09.132218 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:09.132218 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:09.133539 master-0 kubenswrapper[7440]: I0312 14:26:09.132236 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:10.131948 master-0 kubenswrapper[7440]: I0312 14:26:10.131880 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:10.131948 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:10.131948 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:10.131948 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:10.132354 master-0 kubenswrapper[7440]: I0312 14:26:10.132322 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:11.131740 master-0 kubenswrapper[7440]: I0312 14:26:11.131694 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:11.131740 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:11.131740 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:11.131740 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:11.132192 master-0 kubenswrapper[7440]: I0312 14:26:11.132158 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:12.131579 master-0 kubenswrapper[7440]: I0312 14:26:12.131513 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:12.131579 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:12.131579 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:12.131579 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:12.132791 master-0 kubenswrapper[7440]: I0312 14:26:12.131592 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:13.132248 master-0 kubenswrapper[7440]: I0312 14:26:13.132186 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:13.132248 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:13.132248 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:13.132248 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:13.132958 master-0 kubenswrapper[7440]: I0312 14:26:13.132276 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:13.589048 master-0 kubenswrapper[7440]: E0312 14:26:13.588671 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:26:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:26:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:26:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:26:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de56447aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\
"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:14.131763 master-0 kubenswrapper[7440]: I0312 14:26:14.131686 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:14.131763 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:14.131763 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:14.131763 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:14.132111 master-0 kubenswrapper[7440]: I0312 14:26:14.131767 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:15.132712 master-0 kubenswrapper[7440]: I0312 14:26:15.132618 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:15.132712 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:15.132712 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:15.132712 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:26:15.133814 master-0 kubenswrapper[7440]: I0312 14:26:15.132722 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:16.134792 master-0 kubenswrapper[7440]: I0312 14:26:16.134675 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:16.134792 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:16.134792 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:16.134792 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:16.136208 master-0 kubenswrapper[7440]: I0312 14:26:16.134842 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:17.131798 master-0 kubenswrapper[7440]: I0312 14:26:17.131727 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:17.131798 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:17.131798 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:17.131798 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:17.132250 master-0 kubenswrapper[7440]: I0312 14:26:17.131813 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:17.968505 master-0 kubenswrapper[7440]: I0312 14:26:17.968429 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:26:17.969101 master-0 kubenswrapper[7440]: I0312 14:26:17.968526 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:26:18.131714 master-0 kubenswrapper[7440]: I0312 14:26:18.131616 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:18.131714 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:18.131714 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:18.131714 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:18.132015 master-0 kubenswrapper[7440]: I0312 14:26:18.131743 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:19.132432 master-0 kubenswrapper[7440]: I0312 14:26:19.132353 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:19.132432 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:19.132432 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:19.132432 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:19.133372 master-0 kubenswrapper[7440]: I0312 14:26:19.132462 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:20.133382 master-0 kubenswrapper[7440]: I0312 14:26:20.133300 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:20.133382 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:20.133382 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:20.133382 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:20.134670 master-0 kubenswrapper[7440]: I0312 14:26:20.133383 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:21.131926 master-0 kubenswrapper[7440]: I0312 14:26:21.131835 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:26:21.131926 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:21.131926 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:21.131926 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:21.132467 master-0 kubenswrapper[7440]: I0312 14:26:21.131937 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:22.131764 master-0 kubenswrapper[7440]: I0312 14:26:22.131690 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:22.131764 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:22.131764 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:22.131764 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:22.131764 master-0 kubenswrapper[7440]: I0312 14:26:22.131759 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:23.131918 master-0 kubenswrapper[7440]: I0312 14:26:23.131849 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:23.131918 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:23.131918 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:23.131918 master-0 kubenswrapper[7440]: healthz 
check failed Mar 12 14:26:23.132654 master-0 kubenswrapper[7440]: I0312 14:26:23.132619 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:23.589962 master-0 kubenswrapper[7440]: E0312 14:26:23.589910 7440 request.go:1255] Unexpected error when reading response body: context deadline exceeded (Client.Timeout or context cancellation while reading body) Mar 12 14:26:23.590211 master-0 kubenswrapper[7440]: E0312 14:26:23.589995 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": unexpected error when reading response body. Please retry. Original error: context deadline exceeded (Client.Timeout or context cancellation while reading body)" Mar 12 14:26:23.596126 master-0 kubenswrapper[7440]: E0312 14:26:23.596081 7440 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.477s" Mar 12 14:26:23.596361 master-0 kubenswrapper[7440]: I0312 14:26:23.596155 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:26:23.603579 master-0 kubenswrapper[7440]: I0312 14:26:23.603529 7440 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 14:26:23.606329 master-0 kubenswrapper[7440]: I0312 14:26:23.606286 7440 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerID="cri-o://8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196" Mar 12 14:26:23.606329 master-0 kubenswrapper[7440]: I0312 14:26:23.606318 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606341 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606355 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606368 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606380 7440 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="d69c2287-2f67-434b-8615-4b40122dfab6" Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606395 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d"} Mar 12 14:26:23.606451 master-0 kubenswrapper[7440]: I0312 14:26:23.606414 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d5ea8d3aeff7d8613d21bf451df4c494347c5824551bb22ccce9ec8f0d6a8d" Mar 12 14:26:23.608920 master-0 kubenswrapper[7440]: I0312 14:26:23.608716 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:23.609019 master-0 kubenswrapper[7440]: E0312 14:26:23.608937 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:23.609260 master-0 kubenswrapper[7440]: I0312 14:26:23.609236 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.609322 master-0 kubenswrapper[7440]: I0312 14:26:23.609276 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:26:23.609322 master-0 kubenswrapper[7440]: I0312 14:26:23.609294 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:26:23.609322 master-0 kubenswrapper[7440]: I0312 14:26:23.609314 7440 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" containerID="cri-o://e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab" Mar 12 14:26:23.609450 master-0 kubenswrapper[7440]: I0312 14:26:23.609327 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:26:23.609450 master-0 kubenswrapper[7440]: I0312 14:26:23.609372 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:26:23.609450 master-0 kubenswrapper[7440]: I0312 14:26:23.609394 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 14:26:23.609450 master-0 kubenswrapper[7440]: I0312 14:26:23.609410 7440 kubelet.go:2673] "Unable to find pod for mirror 
pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="d69c2287-2f67-434b-8615-4b40122dfab6" Mar 12 14:26:23.609450 master-0 kubenswrapper[7440]: I0312 14:26:23.609446 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609465 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7ad0044b2389b9999007ceef7cd4808d51c84380e6314ac6db787dc5a548f095"} Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609489 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609507 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609519 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16"} Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609534 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609550 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7"} Mar 12 
14:26:23.609646 master-0 kubenswrapper[7440]: I0312 14:26:23.609568 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987"} Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.610335 7440 scope.go:117] "RemoveContainer" containerID="cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7" Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.610403 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.610622 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18"} pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.610669 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" containerID="cri-o://3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18" gracePeriod=600 Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.611095 7440 scope.go:117] "RemoveContainer" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.611211 7440 scope.go:117] "RemoveContainer" containerID="b0ef8cb458573461dc78ec84dd70e59e9585b138f2517187a17259dabba2dfeb" Mar 
12 14:26:23.612002 master-0 kubenswrapper[7440]: E0312 14:26:23.611312 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:26:23.612002 master-0 kubenswrapper[7440]: I0312 14:26:23.611881 7440 scope.go:117] "RemoveContainer" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a" Mar 12 14:26:23.612366 master-0 kubenswrapper[7440]: E0312 14:26:23.612056 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hs6mc_openshift-machine-api(3edaa533-ecbb-443e-a270-4cb4f923daf6)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" podUID="3edaa533-ecbb-443e-a270-4cb4f923daf6" Mar 12 14:26:23.612847 master-0 kubenswrapper[7440]: I0312 14:26:23.612819 7440 scope.go:117] "RemoveContainer" containerID="b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613235 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613264 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613280 7440 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613302 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613319 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerDied","Data":"f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613338 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613352 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613365 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613376 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerDied","Data":"d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613391 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" 
event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613411 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613424 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613435 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41658f62545b7d9b7450bbc8dac7589cb3b2a123f8c6b156d2fe20c54741e987" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613444 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613454 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerDied","Data":"e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613468 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613483 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" 
event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerDied","Data":"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613501 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerDied","Data":"942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613516 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerDied","Data":"c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613533 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613550 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerDied","Data":"6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613564 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerDied","Data":"b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613579 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613590 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613600 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613612 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5129e658d4f38f219309b50d5fba03618805b0cabc3e08b4d6c2ce7c8973f8b3" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613631 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613642 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"1dac5d600ea05249e8d8af0156190efd630d5fd6d9218a7c125e8b47799a9d88"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613656 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613670 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:26:23.613702 
master-0 kubenswrapper[7440]: I0312 14:26:23.613679 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"87e408de3133a4bf2efebc128a746f8d5687684784576d160d686aa712c52c42"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613689 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"} Mar 12 14:26:23.613702 master-0 kubenswrapper[7440]: I0312 14:26:23.613701 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"93f129166a8bd6d0ee33efc1d3e3d3b386f208cdbb79ef0a8dea04125491275d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614025 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerStarted","Data":"80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614056 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerStarted","Data":"510ccfcc8baef1b4d5cf64c2613ac89aa7307dc24793f9d1e3ffbb21645aa509"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614067 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"c29049190c2156c35ffa7feae22368ca8c2c0a91bfbd57f97c9a9e38dccc0bdf"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614079 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614090 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614101 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerDied","Data":"2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614113 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929" Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614122 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614133 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614151 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"557e5767b6a5906fd35802d8cc7a729030365600bcb6aca559cdc1d58e816deb"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614163 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614174 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614184 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerDied","Data":"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614195 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614210 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerDied","Data":"952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614221 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerDied","Data":"1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614233 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerDied","Data":"3229df69e2e642a1705181c6aea965ce680072f14717e055b2a989c42f067dc0"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614479 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerDied","Data":"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614550 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerDied","Data":"1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614583 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerDied","Data":"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261"} Mar 12 
14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614626 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614651 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614670 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerDied","Data":"e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614688 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerDied","Data":"d27cef2ffd951ac8b7af825674c33be11e2853a2bd3265c01b885bcdafe8ff3f"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614708 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"10e2670e6ab6b47f07948c60e7e3a46c3f0ed3468cba558c9fc231e5dc2ca43a"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614729 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerDied","Data":"b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614749 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerDied","Data":"e7dea74eb883602f1f3d133f192958f321d40672d5572126aaddfb68d54ed527"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614787 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerDied","Data":"d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614818 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerDied","Data":"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614833 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerDied","Data":"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.614861 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" 
event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerDied","Data":"84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.615086 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerDied","Data":"10ebd0ad67dc09a94de6455e90b725a93074cf336ebd90eea3f8574d71ab8322"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.615112 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerDied","Data":"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.615260 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerDied","Data":"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.615287 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerDied","Data":"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.615949 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerDied","Data":"875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152"} Mar 12 14:26:23.618004 master-0 
kubenswrapper[7440]: I0312 14:26:23.615972 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"d09193ab64fa4ad5898ed40452f50720dec8c982d5f7eb0df7950d928c3d3534"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616026 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerDied","Data":"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616093 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerDied","Data":"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616128 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616149 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerStarted","Data":"b7f5d85b9d4bda1ad07cf87ff44bb85a1287e1637de9231fcb5c0a28147a7d8e"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616165 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" 
event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"a5acc699b1a37e91f5340ec4c115ef975c8b471e9e344c9594483a5c84605341"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616179 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616193 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616208 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616223 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616235 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"80fe428101670d5bb2198155d9aa028725f1d648d8b1891b02a37c2835bc8023"} Mar 12 
14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616249 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616262 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616275 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"ab1742f72c830599c24487d25e2f7d4998ed83fdb4a1bdbebd1e3d87b6efbbf6"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616288 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616303 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616315 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" 
event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"4144b508950e9d55aa988b5826fcc71dda27f29d18bef2532e5c5b4d53868302"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616329 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616341 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616353 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616365 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerStarted","Data":"8504d8b3c047fc38b216e74a2854ab9051eda54c09a2ad35024a92e002c39426"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616379 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38"} Mar 12 
14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616410 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"ff3016afcdb6778aaf743a4289ede546ee1d9d24d09eb7a34743d13e7defa760"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616435 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"21f8f12539de21393c8dd3d19a14cef264215b3b3fd47ca7bb10332072f42348"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616454 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerDied","Data":"74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616476 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616495 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"a54d7c040e4e83aac6a6fc975cc3d2fd03101d4237db0646f2870734d1932e37"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616511 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" 
event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerDied","Data":"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616529 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616545 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerDied","Data":"69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616561 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerDied","Data":"1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616580 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616595 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerDied","Data":"ff3016afcdb6778aaf743a4289ede546ee1d9d24d09eb7a34743d13e7defa760"} Mar 12 14:26:23.618004 master-0 
kubenswrapper[7440]: I0312 14:26:23.616615 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerDied","Data":"95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616629 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerDied","Data":"44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616644 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerDied","Data":"add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616661 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerDied","Data":"90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616678 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerDied","Data":"ab1742f72c830599c24487d25e2f7d4998ed83fdb4a1bdbebd1e3d87b6efbbf6"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616692 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616707 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"b0ef8cb458573461dc78ec84dd70e59e9585b138f2517187a17259dabba2dfeb"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616722 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616736 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616752 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616767 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26"} Mar 12 14:26:23.618004 master-0 
kubenswrapper[7440]: I0312 14:26:23.616782 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616794 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616807 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"39f68fed61650f6dec97860dd3151ed994abaeef80f3d14d0170b6aa53c69d9d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616824 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616837 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerStarted","Data":"eab4795a93eb894d5a185a08bb28e127ecd93f412b5b24a97499c132d3ea0156"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616851 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" 
event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"882c8b126a35149a72e79b677b717d54b482233f211b3eeec7589c0e044c5274"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616865 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616879 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616919 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616938 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616953 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616969 7440 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.616983 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerDied","Data":"b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617000 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617017 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"b0ef8cb458573461dc78ec84dd70e59e9585b138f2517187a17259dabba2dfeb"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617034 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617049 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" 
event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617063 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617077 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666"} Mar 12 14:26:23.618004 master-0 kubenswrapper[7440]: I0312 14:26:23.617097 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617120 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617136 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" 
event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617150 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerDied","Data":"3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617165 7440 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617182 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5009920acedd27d6a0105b1b145e95689f042f0ff07c9e9a14badc4267ae9ad8"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617195 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9dddf3271a2d9acbd283c8eb5c1a2bf711cfeed332f245d2144a8b6421eca562"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617209 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"52be6745e5385673d059fdbe2baaa4388277f83fc99a7fe7a8efe93c4686d66e"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617221 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"072241b0ff18685e3ac5cf437ba20aea2256aa2c2d716ca900fb030653d7963d"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.617234 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"356f690299e3b7ad78aab551d955363ff311b1f2444fabe29c77c744cb4403f0"} Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.623172 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:26:23.623633 master-0 kubenswrapper[7440]: I0312 14:26:23.623214 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="10b1bd98-beac-469c-9a65-abee3ca8a243" Mar 12 14:26:23.639090 master-0 kubenswrapper[7440]: I0312 14:26:23.639035 7440 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 12 14:26:23.640130 master-0 kubenswrapper[7440]: I0312 14:26:23.640107 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 14:26:23.643875 master-0 kubenswrapper[7440]: I0312 14:26:23.643803 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 14:26:23.674051 master-0 kubenswrapper[7440]: I0312 14:26:23.674008 7440 scope.go:117] "RemoveContainer" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:26:23.815252 master-0 kubenswrapper[7440]: I0312 14:26:23.815199 7440 scope.go:117] "RemoveContainer" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:26:23.871110 master-0 kubenswrapper[7440]: I0312 14:26:23.870927 7440 scope.go:117] "RemoveContainer" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:26:23.902269 master-0 
kubenswrapper[7440]: I0312 14:26:23.902247 7440 scope.go:117] "RemoveContainer" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 14:26:23.922015 master-0 kubenswrapper[7440]: I0312 14:26:23.921961 7440 scope.go:117] "RemoveContainer" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" Mar 12 14:26:23.941955 master-0 kubenswrapper[7440]: I0312 14:26:23.941924 7440 scope.go:117] "RemoveContainer" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" Mar 12 14:26:23.960940 master-0 kubenswrapper[7440]: I0312 14:26:23.960873 7440 scope.go:117] "RemoveContainer" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:26:23.961417 master-0 kubenswrapper[7440]: E0312 14:26:23.961371 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": container with ID starting with 82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d not found: ID does not exist" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:26:23.961486 master-0 kubenswrapper[7440]: I0312 14:26:23.961426 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d"} err="failed to get container status \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": rpc error: code = NotFound desc = could not find container \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": container with ID starting with 82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d not found: ID does not exist" Mar 12 14:26:23.961486 master-0 kubenswrapper[7440]: I0312 14:26:23.961461 7440 scope.go:117] "RemoveContainer" 
containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:26:23.962651 master-0 kubenswrapper[7440]: E0312 14:26:23.962573 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": container with ID starting with 8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15 not found: ID does not exist" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:26:23.962725 master-0 kubenswrapper[7440]: I0312 14:26:23.962649 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15"} err="failed to get container status \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": rpc error: code = NotFound desc = could not find container \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": container with ID starting with 8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15 not found: ID does not exist" Mar 12 14:26:23.962725 master-0 kubenswrapper[7440]: I0312 14:26:23.962670 7440 scope.go:117] "RemoveContainer" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:26:23.963107 master-0 kubenswrapper[7440]: E0312 14:26:23.963033 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": container with ID starting with 7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812 not found: ID does not exist" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:26:23.963178 master-0 kubenswrapper[7440]: I0312 14:26:23.963108 7440 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812"} err="failed to get container status \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": rpc error: code = NotFound desc = could not find container \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": container with ID starting with 7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812 not found: ID does not exist" Mar 12 14:26:23.963218 master-0 kubenswrapper[7440]: I0312 14:26:23.963175 7440 scope.go:117] "RemoveContainer" containerID="c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1" Mar 12 14:26:23.982531 master-0 kubenswrapper[7440]: I0312 14:26:23.982499 7440 scope.go:117] "RemoveContainer" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 14:26:23.982988 master-0 kubenswrapper[7440]: E0312 14:26:23.982952 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": container with ID starting with e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4 not found: ID does not exist" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 14:26:23.983057 master-0 kubenswrapper[7440]: I0312 14:26:23.983020 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4"} err="failed to get container status \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": rpc error: code = NotFound desc = could not find container \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": container with ID starting with e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4 not found: ID does not exist" Mar 12 14:26:23.983105 master-0 kubenswrapper[7440]: I0312 
14:26:23.983057 7440 scope.go:117] "RemoveContainer" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" Mar 12 14:26:23.983339 master-0 kubenswrapper[7440]: E0312 14:26:23.983307 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": container with ID starting with 76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61 not found: ID does not exist" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" Mar 12 14:26:23.983339 master-0 kubenswrapper[7440]: I0312 14:26:23.983328 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"} err="failed to get container status \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": rpc error: code = NotFound desc = could not find container \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": container with ID starting with 76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61 not found: ID does not exist" Mar 12 14:26:23.983339 master-0 kubenswrapper[7440]: I0312 14:26:23.983340 7440 scope.go:117] "RemoveContainer" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" Mar 12 14:26:23.983809 master-0 kubenswrapper[7440]: E0312 14:26:23.983762 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af\": container with ID starting with b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af not found: ID does not exist" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" Mar 12 14:26:23.983861 master-0 kubenswrapper[7440]: I0312 14:26:23.983809 7440 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af"} err="failed to get container status \"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af\": rpc error: code = NotFound desc = could not find container \"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af\": container with ID starting with b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af not found: ID does not exist" Mar 12 14:26:23.983861 master-0 kubenswrapper[7440]: I0312 14:26:23.983839 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:26:24.007118 master-0 kubenswrapper[7440]: I0312 14:26:24.006755 7440 scope.go:117] "RemoveContainer" containerID="b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be" Mar 12 14:26:24.030127 master-0 kubenswrapper[7440]: I0312 14:26:24.029850 7440 scope.go:117] "RemoveContainer" containerID="93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261" Mar 12 14:26:24.031451 master-0 kubenswrapper[7440]: E0312 14:26:24.031389 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:26:24.048168 master-0 kubenswrapper[7440]: I0312 14:26:24.048119 7440 scope.go:117] "RemoveContainer" containerID="926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1" Mar 12 14:26:24.079910 master-0 kubenswrapper[7440]: I0312 14:26:24.079861 7440 scope.go:117] "RemoveContainer" containerID="f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571" Mar 12 14:26:24.114437 master-0 kubenswrapper[7440]: I0312 14:26:24.113262 7440 scope.go:117] "RemoveContainer" 
containerID="9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307" Mar 12 14:26:24.133338 master-0 kubenswrapper[7440]: I0312 14:26:24.133309 7440 scope.go:117] "RemoveContainer" containerID="ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f" Mar 12 14:26:24.150348 master-0 kubenswrapper[7440]: I0312 14:26:24.134364 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:24.150348 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:24.150348 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:24.150348 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:24.150348 master-0 kubenswrapper[7440]: I0312 14:26:24.134469 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:24.183473 master-0 kubenswrapper[7440]: I0312 14:26:24.183420 7440 scope.go:117] "RemoveContainer" containerID="b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be" Mar 12 14:26:24.205534 master-0 kubenswrapper[7440]: I0312 14:26:24.205496 7440 scope.go:117] "RemoveContainer" containerID="0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f" Mar 12 14:26:24.225974 master-0 kubenswrapper[7440]: I0312 14:26:24.225944 7440 scope.go:117] "RemoveContainer" containerID="fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d" Mar 12 14:26:24.244277 master-0 kubenswrapper[7440]: I0312 14:26:24.244234 7440 scope.go:117] "RemoveContainer" containerID="e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c" Mar 12 14:26:24.265497 master-0 kubenswrapper[7440]: I0312 14:26:24.265455 
7440 scope.go:117] "RemoveContainer" containerID="5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30" Mar 12 14:26:24.285530 master-0 kubenswrapper[7440]: I0312 14:26:24.285489 7440 scope.go:117] "RemoveContainer" containerID="cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7" Mar 12 14:26:24.285926 master-0 kubenswrapper[7440]: E0312 14:26:24.285876 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7\": container with ID starting with cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7 not found: ID does not exist" containerID="cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7" Mar 12 14:26:24.286004 master-0 kubenswrapper[7440]: I0312 14:26:24.285934 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7"} err="failed to get container status \"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7\": rpc error: code = NotFound desc = could not find container \"cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7\": container with ID starting with cb41f5989ad50bdc5ae078b167c9bb559590c0f507a4b8b3d5d90309a6eca4b7 not found: ID does not exist" Mar 12 14:26:24.286004 master-0 kubenswrapper[7440]: I0312 14:26:24.285957 7440 scope.go:117] "RemoveContainer" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:26:24.286227 master-0 kubenswrapper[7440]: I0312 14:26:24.286202 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d"} err="failed to get container status \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": rpc error: code = NotFound desc = could not find container 
\"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": container with ID starting with 82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d not found: ID does not exist" Mar 12 14:26:24.286227 master-0 kubenswrapper[7440]: I0312 14:26:24.286222 7440 scope.go:117] "RemoveContainer" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:26:24.286413 master-0 kubenswrapper[7440]: I0312 14:26:24.286391 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15"} err="failed to get container status \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": rpc error: code = NotFound desc = could not find container \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": container with ID starting with 8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15 not found: ID does not exist" Mar 12 14:26:24.286413 master-0 kubenswrapper[7440]: I0312 14:26:24.286408 7440 scope.go:117] "RemoveContainer" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:26:24.286861 master-0 kubenswrapper[7440]: I0312 14:26:24.286837 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812"} err="failed to get container status \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": rpc error: code = NotFound desc = could not find container \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": container with ID starting with 7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812 not found: ID does not exist" Mar 12 14:26:24.286861 master-0 kubenswrapper[7440]: I0312 14:26:24.286856 7440 scope.go:117] "RemoveContainer" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 
14:26:24.287353 master-0 kubenswrapper[7440]: I0312 14:26:24.287167 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4"} err="failed to get container status \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": rpc error: code = NotFound desc = could not find container \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": container with ID starting with e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4 not found: ID does not exist" Mar 12 14:26:24.287353 master-0 kubenswrapper[7440]: I0312 14:26:24.287220 7440 scope.go:117] "RemoveContainer" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" Mar 12 14:26:24.287765 master-0 kubenswrapper[7440]: I0312 14:26:24.287726 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"} err="failed to get container status \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": rpc error: code = NotFound desc = could not find container \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": container with ID starting with 76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61 not found: ID does not exist" Mar 12 14:26:24.287765 master-0 kubenswrapper[7440]: I0312 14:26:24.287752 7440 scope.go:117] "RemoveContainer" containerID="b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af" Mar 12 14:26:24.288088 master-0 kubenswrapper[7440]: I0312 14:26:24.288037 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af"} err="failed to get container status \"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af\": rpc error: code = NotFound desc = could not find container 
\"b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af\": container with ID starting with b7d782d5bb2308ec609e902e46de5e46198bee9122afbefc61233a9ba61991af not found: ID does not exist" Mar 12 14:26:24.288088 master-0 kubenswrapper[7440]: I0312 14:26:24.288055 7440 scope.go:117] "RemoveContainer" containerID="e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c" Mar 12 14:26:24.288828 master-0 kubenswrapper[7440]: E0312 14:26:24.288795 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c\": container with ID starting with e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c not found: ID does not exist" containerID="e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c" Mar 12 14:26:24.288828 master-0 kubenswrapper[7440]: I0312 14:26:24.288819 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c"} err="failed to get container status \"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c\": rpc error: code = NotFound desc = could not find container \"e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c\": container with ID starting with e69ae5e560439e8be83727200f3f70b72e784d09cd8dbceed926d8123583ce1c not found: ID does not exist" Mar 12 14:26:24.288953 master-0 kubenswrapper[7440]: I0312 14:26:24.288836 7440 scope.go:117] "RemoveContainer" containerID="b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be" Mar 12 14:26:24.289209 master-0 kubenswrapper[7440]: E0312 14:26:24.289168 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be\": container with ID starting with 
b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be not found: ID does not exist" containerID="b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be" Mar 12 14:26:24.289274 master-0 kubenswrapper[7440]: I0312 14:26:24.289208 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be"} err="failed to get container status \"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be\": rpc error: code = NotFound desc = could not find container \"b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be\": container with ID starting with b98815f2940c407dcd2edaca0a185078f6d9c591becb207f34495f0ed682e5be not found: ID does not exist" Mar 12 14:26:24.289274 master-0 kubenswrapper[7440]: I0312 14:26:24.289228 7440 scope.go:117] "RemoveContainer" containerID="926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1" Mar 12 14:26:24.289274 master-0 kubenswrapper[7440]: I0312 14:26:24.289218 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/2.log" Mar 12 14:26:24.289529 master-0 kubenswrapper[7440]: E0312 14:26:24.289512 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1\": container with ID starting with 926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1 not found: ID does not exist" containerID="926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1" Mar 12 14:26:24.289576 master-0 kubenswrapper[7440]: I0312 14:26:24.289532 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1"} err="failed to get container status 
\"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1\": rpc error: code = NotFound desc = could not find container \"926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1\": container with ID starting with 926a040435e0968b248eb5c7123d8465f49b77a778c24d92b17563fbe0da4bd1 not found: ID does not exist" Mar 12 14:26:24.289576 master-0 kubenswrapper[7440]: I0312 14:26:24.289545 7440 scope.go:117] "RemoveContainer" containerID="b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be" Mar 12 14:26:24.289968 master-0 kubenswrapper[7440]: E0312 14:26:24.289943 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be\": container with ID starting with b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be not found: ID does not exist" containerID="b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be" Mar 12 14:26:24.289968 master-0 kubenswrapper[7440]: I0312 14:26:24.289965 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be"} err="failed to get container status \"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be\": rpc error: code = NotFound desc = could not find container \"b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be\": container with ID starting with b53df61802c76275e2ee152b7486584e46a40bc0a811c6ed0a3e9d62b01955be not found: ID does not exist" Mar 12 14:26:24.290089 master-0 kubenswrapper[7440]: I0312 14:26:24.289978 7440 scope.go:117] "RemoveContainer" containerID="9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307" Mar 12 14:26:24.290179 master-0 kubenswrapper[7440]: E0312 14:26:24.290159 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307\": container with ID starting with 9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307 not found: ID does not exist" containerID="9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307" Mar 12 14:26:24.290226 master-0 kubenswrapper[7440]: I0312 14:26:24.290178 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307"} err="failed to get container status \"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307\": rpc error: code = NotFound desc = could not find container \"9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307\": container with ID starting with 9187f76670a738ddd581636a016ef4d6741503d5745e898edf219cba574d1307 not found: ID does not exist" Mar 12 14:26:24.290226 master-0 kubenswrapper[7440]: I0312 14:26:24.290189 7440 scope.go:117] "RemoveContainer" containerID="ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f" Mar 12 14:26:24.290515 master-0 kubenswrapper[7440]: E0312 14:26:24.290493 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f\": container with ID starting with ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f not found: ID does not exist" containerID="ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f" Mar 12 14:26:24.290601 master-0 kubenswrapper[7440]: I0312 14:26:24.290514 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f"} err="failed to get container status \"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f\": rpc error: code = NotFound desc = could not find container 
\"ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f\": container with ID starting with ea065bab14dca0766dced510f8f192078bd28fcc445355d287138a674e19946f not found: ID does not exist" Mar 12 14:26:24.290601 master-0 kubenswrapper[7440]: I0312 14:26:24.290528 7440 scope.go:117] "RemoveContainer" containerID="0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f" Mar 12 14:26:24.290736 master-0 kubenswrapper[7440]: E0312 14:26:24.290713 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f\": container with ID starting with 0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f not found: ID does not exist" containerID="0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f" Mar 12 14:26:24.290804 master-0 kubenswrapper[7440]: I0312 14:26:24.290736 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f"} err="failed to get container status \"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f\": rpc error: code = NotFound desc = could not find container \"0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f\": container with ID starting with 0eed999a49dbae8cddba70df11741d86114a7456650eda2650c12101e15de11f not found: ID does not exist" Mar 12 14:26:24.290804 master-0 kubenswrapper[7440]: I0312 14:26:24.290751 7440 scope.go:117] "RemoveContainer" containerID="93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261" Mar 12 14:26:24.290984 master-0 kubenswrapper[7440]: I0312 14:26:24.290962 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/4.log" Mar 12 14:26:24.291265 master-0 kubenswrapper[7440]: I0312 
14:26:24.291244 7440 scope.go:117] "RemoveContainer" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" Mar 12 14:26:24.291431 master-0 kubenswrapper[7440]: E0312 14:26:24.291402 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:26:24.291493 master-0 kubenswrapper[7440]: E0312 14:26:24.291479 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261\": container with ID starting with 93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261 not found: ID does not exist" containerID="93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261" Mar 12 14:26:24.291544 master-0 kubenswrapper[7440]: I0312 14:26:24.291500 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261"} err="failed to get container status \"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261\": rpc error: code = NotFound desc = could not find container \"93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261\": container with ID starting with 93fc043f83fd1d3afac8895480948677e740498aeff368b3ec9e23d75ce7f261 not found: ID does not exist" Mar 12 14:26:24.291544 master-0 kubenswrapper[7440]: I0312 14:26:24.291518 7440 scope.go:117] "RemoveContainer" containerID="fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d" Mar 12 14:26:24.291768 master-0 kubenswrapper[7440]: E0312 
14:26:24.291742 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d\": container with ID starting with fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d not found: ID does not exist" containerID="fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d" Mar 12 14:26:24.291824 master-0 kubenswrapper[7440]: I0312 14:26:24.291767 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d"} err="failed to get container status \"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d\": rpc error: code = NotFound desc = could not find container \"fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d\": container with ID starting with fa444aaa7916a9b8ce7bfb85bc927673df9636ab7f0f10b61e757d7a6e637d9d not found: ID does not exist" Mar 12 14:26:24.291824 master-0 kubenswrapper[7440]: I0312 14:26:24.291781 7440 scope.go:117] "RemoveContainer" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:26:24.292017 master-0 kubenswrapper[7440]: E0312 14:26:24.291988 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb\": container with ID starting with a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb not found: ID does not exist" containerID="a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb" Mar 12 14:26:24.292017 master-0 kubenswrapper[7440]: I0312 14:26:24.292007 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb"} err="failed to get container status 
\"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb\": rpc error: code = NotFound desc = could not find container \"a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb\": container with ID starting with a946cdc53167780579891b144ae4c01088126bb42ef45317938bc8fe5fe26cbb not found: ID does not exist" Mar 12 14:26:24.292189 master-0 kubenswrapper[7440]: I0312 14:26:24.292021 7440 scope.go:117] "RemoveContainer" containerID="c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1" Mar 12 14:26:24.292409 master-0 kubenswrapper[7440]: E0312 14:26:24.292385 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1\": container with ID starting with c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1 not found: ID does not exist" containerID="c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1" Mar 12 14:26:24.292463 master-0 kubenswrapper[7440]: I0312 14:26:24.292409 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1"} err="failed to get container status \"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1\": rpc error: code = NotFound desc = could not find container \"c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1\": container with ID starting with c7f808f64216bac6165a91847cbe1e04c9cbb2e41a6946684e87039fd940bcf1 not found: ID does not exist" Mar 12 14:26:24.292463 master-0 kubenswrapper[7440]: I0312 14:26:24.292422 7440 scope.go:117] "RemoveContainer" containerID="5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30" Mar 12 14:26:24.292669 master-0 kubenswrapper[7440]: E0312 14:26:24.292645 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30\": container with ID starting with 5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30 not found: ID does not exist" containerID="5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30" Mar 12 14:26:24.292720 master-0 kubenswrapper[7440]: I0312 14:26:24.292664 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30"} err="failed to get container status \"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30\": rpc error: code = NotFound desc = could not find container \"5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30\": container with ID starting with 5c0e8a37f9d56e49ba600123779ab452255e4d506e12df3758cc982e1da22f30 not found: ID does not exist" Mar 12 14:26:24.292720 master-0 kubenswrapper[7440]: I0312 14:26:24.292679 7440 scope.go:117] "RemoveContainer" containerID="82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d" Mar 12 14:26:24.292945 master-0 kubenswrapper[7440]: I0312 14:26:24.292921 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d"} err="failed to get container status \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": rpc error: code = NotFound desc = could not find container \"82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d\": container with ID starting with 82a229708282890eba0f2dd66591b7d498131fca3dd378e3fd0c6eab0f3fa96d not found: ID does not exist" Mar 12 14:26:24.292945 master-0 kubenswrapper[7440]: I0312 14:26:24.292941 7440 scope.go:117] "RemoveContainer" containerID="8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15" Mar 12 14:26:24.293414 master-0 kubenswrapper[7440]: I0312 14:26:24.293223 7440 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15"} err="failed to get container status \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": rpc error: code = NotFound desc = could not find container \"8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15\": container with ID starting with 8ae9516d1bad64d2b36bf66ae5496f784cbde176fd71bcff31926fef9dd2ff15 not found: ID does not exist" Mar 12 14:26:24.293414 master-0 kubenswrapper[7440]: I0312 14:26:24.293257 7440 scope.go:117] "RemoveContainer" containerID="7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812" Mar 12 14:26:24.293776 master-0 kubenswrapper[7440]: I0312 14:26:24.293754 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812"} err="failed to get container status \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": rpc error: code = NotFound desc = could not find container \"7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812\": container with ID starting with 7bd65ca4e680b5333dd47dc3da6564b9ecb4961327d3c93643808daf9a4c8812 not found: ID does not exist" Mar 12 14:26:24.293776 master-0 kubenswrapper[7440]: I0312 14:26:24.293775 7440 scope.go:117] "RemoveContainer" containerID="f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571" Mar 12 14:26:24.294066 master-0 kubenswrapper[7440]: E0312 14:26:24.294043 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571\": container with ID starting with f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571 not found: ID does not exist" containerID="f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571" Mar 12 14:26:24.294122 master-0 kubenswrapper[7440]: 
I0312 14:26:24.294063 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571"} err="failed to get container status \"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571\": rpc error: code = NotFound desc = could not find container \"f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571\": container with ID starting with f3d9c730da43b24ec075e5b126409b0c8c7273cecb83802d3e5610d1f61d4571 not found: ID does not exist" Mar 12 14:26:24.294122 master-0 kubenswrapper[7440]: I0312 14:26:24.294076 7440 scope.go:117] "RemoveContainer" containerID="e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4" Mar 12 14:26:24.294410 master-0 kubenswrapper[7440]: I0312 14:26:24.294368 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4"} err="failed to get container status \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": rpc error: code = NotFound desc = could not find container \"e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4\": container with ID starting with e085982a635b4a0eba26b5bb736bad34e1f9261a79ed2b915c47c028db213dd4 not found: ID does not exist" Mar 12 14:26:24.294410 master-0 kubenswrapper[7440]: I0312 14:26:24.294397 7440 scope.go:117] "RemoveContainer" containerID="76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61" Mar 12 14:26:24.294840 master-0 kubenswrapper[7440]: I0312 14:26:24.294818 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61"} err="failed to get container status \"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": rpc error: code = NotFound desc = could not find container 
\"76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61\": container with ID starting with 76d5f71d0b9e07fd3636c963c1e496e4449c72c239decd6092cccc9fe18dbb61 not found: ID does not exist" Mar 12 14:26:24.295332 master-0 kubenswrapper[7440]: I0312 14:26:24.295312 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/2.log" Mar 12 14:26:24.297081 master-0 kubenswrapper[7440]: I0312 14:26:24.297048 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/2.log" Mar 12 14:26:24.313566 master-0 kubenswrapper[7440]: I0312 14:26:24.313481 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/2.log" Mar 12 14:26:24.317649 master-0 kubenswrapper[7440]: I0312 14:26:24.317613 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18" exitCode=0 Mar 12 14:26:24.317740 master-0 kubenswrapper[7440]: I0312 14:26:24.317706 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18"} Mar 12 14:26:24.317740 master-0 kubenswrapper[7440]: I0312 14:26:24.317735 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" 
event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4"} Mar 12 14:26:24.317850 master-0 kubenswrapper[7440]: I0312 14:26:24.317759 7440 scope.go:117] "RemoveContainer" containerID="f87f3196293c0cde53119456354d52266c897c928bf77795c604874d22ff9dfd" Mar 12 14:26:24.320700 master-0 kubenswrapper[7440]: I0312 14:26:24.320645 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/3.log" Mar 12 14:26:24.321420 master-0 kubenswrapper[7440]: I0312 14:26:24.321387 7440 scope.go:117] "RemoveContainer" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a" Mar 12 14:26:24.321608 master-0 kubenswrapper[7440]: E0312 14:26:24.321580 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hs6mc_openshift-machine-api(3edaa533-ecbb-443e-a270-4cb4f923daf6)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" podUID="3edaa533-ecbb-443e-a270-4cb4f923daf6" Mar 12 14:26:24.323881 master-0 kubenswrapper[7440]: I0312 14:26:24.323275 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"4554fa36ea62af0faebf9a33b90a529e86ff1bd8c518571b83301ec75299b664"} Mar 12 14:26:24.324709 master-0 kubenswrapper[7440]: I0312 14:26:24.324687 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/2.log" Mar 12 
14:26:24.326859 master-0 kubenswrapper[7440]: I0312 14:26:24.326835 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/1.log" Mar 12 14:26:24.331305 master-0 kubenswrapper[7440]: I0312 14:26:24.331275 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/1.log" Mar 12 14:26:24.331566 master-0 kubenswrapper[7440]: I0312 14:26:24.331541 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6"} Mar 12 14:26:24.331772 master-0 kubenswrapper[7440]: I0312 14:26:24.331737 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:26:24.336575 master-0 kubenswrapper[7440]: I0312 14:26:24.336542 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:26:24.339007 master-0 kubenswrapper[7440]: I0312 14:26:24.338986 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:24.339265 master-0 kubenswrapper[7440]: E0312 14:26:24.339211 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:24.341284 master-0 kubenswrapper[7440]: I0312 14:26:24.341245 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/2.log" Mar 12 14:26:24.343261 master-0 kubenswrapper[7440]: I0312 14:26:24.343161 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/1.log" Mar 12 14:26:24.345069 master-0 kubenswrapper[7440]: I0312 14:26:24.345049 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/2.log" Mar 12 14:26:24.348057 master-0 kubenswrapper[7440]: I0312 14:26:24.347664 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/2.log" Mar 12 14:26:24.349072 master-0 kubenswrapper[7440]: I0312 14:26:24.349041 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:26:24.953885 master-0 kubenswrapper[7440]: I0312 14:26:24.953775 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:26:24.954182 master-0 kubenswrapper[7440]: I0312 
14:26:24.953885 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:25.131509 master-0 kubenswrapper[7440]: I0312 14:26:25.131455 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:25.131509 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:25.131509 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:25.131509 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:25.131723 master-0 kubenswrapper[7440]: I0312 14:26:25.131525 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:25.332736 master-0 kubenswrapper[7440]: I0312 14:26:25.332655 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:26:25.332736 master-0 kubenswrapper[7440]: I0312 14:26:25.332719 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" 
podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:25.349575 master-0 kubenswrapper[7440]: I0312 14:26:25.349511 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:26:25.349575 master-0 kubenswrapper[7440]: I0312 14:26:25.349555 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.357310 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-fv6pp_76d596c0-6a41-43e1-9516-aee9ad834ec2/service-ca-operator/2.log" Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.358153 7440 generic.go:334] "Generic (PLEG): container finished" podID="76d596c0-6a41-43e1-9516-aee9ad834ec2" containerID="132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce" exitCode=255 Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.358309 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" 
event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerDied","Data":"132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce"} Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.358362 7440 scope.go:117] "RemoveContainer" containerID="3229df69e2e642a1705181c6aea965ce680072f14717e055b2a989c42f067dc0" Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.358719 7440 scope.go:117] "RemoveContainer" containerID="132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce" Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.358979 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": read tcp 10.128.0.2:60470->10.128.0.15:8443: read: connection reset by peer" start-of-body= Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: I0312 14:26:25.359028 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": read tcp 10.128.0.2:60470->10.128.0.15:8443: read: connection reset by peer" Mar 12 14:26:25.364782 master-0 kubenswrapper[7440]: E0312 14:26:25.360868 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-69b6fc6b88-fv6pp_openshift-service-ca-operator(76d596c0-6a41-43e1-9516-aee9ad834ec2)\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" podUID="76d596c0-6a41-43e1-9516-aee9ad834ec2" Mar 12 14:26:25.370053 master-0 kubenswrapper[7440]: I0312 14:26:25.370002 7440 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gt2tw_3f72fbbe-69f0-4622-be05-b839ff9b4d45/openshift-apiserver-operator/2.log" Mar 12 14:26:25.370483 master-0 kubenswrapper[7440]: I0312 14:26:25.370433 7440 generic.go:334] "Generic (PLEG): container finished" podID="3f72fbbe-69f0-4622-be05-b839ff9b4d45" containerID="46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c" exitCode=255 Mar 12 14:26:25.370546 master-0 kubenswrapper[7440]: I0312 14:26:25.370513 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerDied","Data":"46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c"} Mar 12 14:26:25.370802 master-0 kubenswrapper[7440]: I0312 14:26:25.370765 7440 scope.go:117] "RemoveContainer" containerID="46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c" Mar 12 14:26:25.371087 master-0 kubenswrapper[7440]: E0312 14:26:25.371038 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-799b6db4d7-gt2tw_openshift-apiserver-operator(3f72fbbe-69f0-4622-be05-b839ff9b4d45)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" podUID="3f72fbbe-69f0-4622-be05-b839ff9b4d45" Mar 12 14:26:25.371931 master-0 kubenswrapper[7440]: I0312 14:26:25.371908 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-vpn8v_08ea0d9f-0635-4759-803e-572eca2f2d34/kube-scheduler-operator-container/1.log" Mar 12 14:26:25.372350 master-0 kubenswrapper[7440]: I0312 14:26:25.372299 7440 generic.go:334] "Generic (PLEG): container finished" 
podID="08ea0d9f-0635-4759-803e-572eca2f2d34" containerID="c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9" exitCode=255 Mar 12 14:26:25.372959 master-0 kubenswrapper[7440]: I0312 14:26:25.372413 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerDied","Data":"c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9"} Mar 12 14:26:25.373042 master-0 kubenswrapper[7440]: I0312 14:26:25.372725 7440 scope.go:117] "RemoveContainer" containerID="c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9" Mar 12 14:26:25.373253 master-0 kubenswrapper[7440]: E0312 14:26:25.373203 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-5c74bfc494-vpn8v_openshift-kube-scheduler-operator(08ea0d9f-0635-4759-803e-572eca2f2d34)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" podUID="08ea0d9f-0635-4759-803e-572eca2f2d34" Mar 12 14:26:25.374429 master-0 kubenswrapper[7440]: I0312 14:26:25.374391 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/1.log" Mar 12 14:26:25.377158 master-0 kubenswrapper[7440]: I0312 14:26:25.377107 7440 generic.go:334] "Generic (PLEG): container finished" podID="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" containerID="f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d" exitCode=255 Mar 12 14:26:25.377309 master-0 kubenswrapper[7440]: I0312 14:26:25.377174 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerDied","Data":"f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d"} Mar 12 14:26:25.378031 master-0 kubenswrapper[7440]: I0312 14:26:25.377998 7440 scope.go:117] "RemoveContainer" containerID="f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d" Mar 12 14:26:25.378290 master-0 kubenswrapper[7440]: E0312 14:26:25.378220 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-6fbfc8dc8f-xgrsw_openshift-cluster-storage-operator(06eb9f4b-167e-435b-8ef6-ae44fc0b85a9)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" podUID="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" Mar 12 14:26:25.381408 master-0 kubenswrapper[7440]: I0312 14:26:25.381306 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-h8sq4_8106d14a-b448-4dd1-bccd-926f85394b5d/cluster-olm-operator/2.log" Mar 12 14:26:25.382740 master-0 kubenswrapper[7440]: I0312 14:26:25.382707 7440 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26" exitCode=255 Mar 12 14:26:25.382813 master-0 kubenswrapper[7440]: I0312 14:26:25.382761 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26"} Mar 12 14:26:25.383671 master-0 kubenswrapper[7440]: I0312 14:26:25.383635 7440 scope.go:117] "RemoveContainer" 
containerID="07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26" Mar 12 14:26:25.384095 master-0 kubenswrapper[7440]: E0312 14:26:25.384059 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-77899cf6d-h8sq4_openshift-cluster-olm-operator(8106d14a-b448-4dd1-bccd-926f85394b5d)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" podUID="8106d14a-b448-4dd1-bccd-926f85394b5d" Mar 12 14:26:25.386914 master-0 kubenswrapper[7440]: I0312 14:26:25.386876 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:25.386975 master-0 kubenswrapper[7440]: I0312 14:26:25.386942 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:25.387780 master-0 kubenswrapper[7440]: I0312 14:26:25.387750 7440 scope.go:117] "RemoveContainer" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a" Mar 12 14:26:25.388040 master-0 kubenswrapper[7440]: E0312 14:26:25.387998 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hs6mc_openshift-machine-api(3edaa533-ecbb-443e-a270-4cb4f923daf6)\"" 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" podUID="3edaa533-ecbb-443e-a270-4cb4f923daf6" Mar 12 14:26:25.390714 master-0 kubenswrapper[7440]: I0312 14:26:25.390681 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:25.390978 master-0 kubenswrapper[7440]: E0312 14:26:25.390947 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:25.392304 master-0 kubenswrapper[7440]: I0312 14:26:25.392266 7440 scope.go:117] "RemoveContainer" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" Mar 12 14:26:25.392828 master-0 kubenswrapper[7440]: E0312 14:26:25.392783 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:26:25.607962 master-0 kubenswrapper[7440]: I0312 14:26:25.607865 7440 scope.go:117] "RemoveContainer" containerID="e7dea74eb883602f1f3d133f192958f321d40672d5572126aaddfb68d54ed527" Mar 12 14:26:25.627726 master-0 kubenswrapper[7440]: I0312 14:26:25.627678 7440 scope.go:117] "RemoveContainer" containerID="d27cef2ffd951ac8b7af825674c33be11e2853a2bd3265c01b885bcdafe8ff3f" Mar 12 14:26:25.651220 master-0 kubenswrapper[7440]: I0312 
14:26:25.651009 7440 scope.go:117] "RemoveContainer" containerID="10ebd0ad67dc09a94de6455e90b725a93074cf336ebd90eea3f8574d71ab8322" Mar 12 14:26:25.668816 master-0 kubenswrapper[7440]: I0312 14:26:25.668784 7440 scope.go:117] "RemoveContainer" containerID="d09193ab64fa4ad5898ed40452f50720dec8c982d5f7eb0df7950d928c3d3534" Mar 12 14:26:25.932248 master-0 kubenswrapper[7440]: I0312 14:26:25.932127 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 12 14:26:25.932248 master-0 kubenswrapper[7440]: I0312 14:26:25.932225 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 14:26:25.958948 master-0 kubenswrapper[7440]: I0312 14:26:25.958861 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 12 14:26:26.132135 master-0 kubenswrapper[7440]: I0312 14:26:26.131971 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:26.132135 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:26.132135 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:26.132135 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:26.132135 master-0 kubenswrapper[7440]: I0312 14:26:26.132056 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:26.386022 master-0 kubenswrapper[7440]: I0312 14:26:26.385843 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:26:26.386022 master-0 kubenswrapper[7440]: I0312 14:26:26.385945 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:26.395369 master-0 kubenswrapper[7440]: I0312 14:26:26.395300 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/2.log" Mar 12 14:26:26.396247 master-0 kubenswrapper[7440]: I0312 14:26:26.396215 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/1.log" Mar 12 14:26:26.396476 master-0 kubenswrapper[7440]: I0312 14:26:26.396441 7440 generic.go:334] "Generic (PLEG): container finished" podID="61de099a-410b-4d30-83e8-19cf5901cb27" containerID="a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5" exitCode=255 Mar 12 14:26:26.396675 master-0 kubenswrapper[7440]: I0312 14:26:26.396532 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerDied","Data":"a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5"} Mar 12 14:26:26.397382 master-0 kubenswrapper[7440]: I0312 14:26:26.397347 7440 scope.go:117] "RemoveContainer" containerID="ff3016afcdb6778aaf743a4289ede546ee1d9d24d09eb7a34743d13e7defa760" Mar 12 14:26:26.398072 master-0 kubenswrapper[7440]: I0312 14:26:26.398028 
7440 scope.go:117] "RemoveContainer" containerID="a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5" Mar 12 14:26:26.398577 master-0 kubenswrapper[7440]: E0312 14:26:26.398298 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-controller pod=service-ca-84bfdbbb7f-7lx8p_openshift-service-ca(61de099a-410b-4d30-83e8-19cf5901cb27)\"" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" podUID="61de099a-410b-4d30-83e8-19cf5901cb27" Mar 12 14:26:26.401732 master-0 kubenswrapper[7440]: I0312 14:26:26.401707 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-vpn8v_08ea0d9f-0635-4759-803e-572eca2f2d34/kube-scheduler-operator-container/1.log" Mar 12 14:26:26.402369 master-0 kubenswrapper[7440]: I0312 14:26:26.402348 7440 scope.go:117] "RemoveContainer" containerID="c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9" Mar 12 14:26:26.402631 master-0 kubenswrapper[7440]: E0312 14:26:26.402595 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-5c74bfc494-vpn8v_openshift-kube-scheduler-operator(08ea0d9f-0635-4759-803e-572eca2f2d34)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" podUID="08ea0d9f-0635-4759-803e-572eca2f2d34" Mar 12 14:26:26.403913 master-0 kubenswrapper[7440]: I0312 14:26:26.403871 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/2.log" Mar 12 14:26:26.404526 master-0 
kubenswrapper[7440]: I0312 14:26:26.404506 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/1.log" Mar 12 14:26:26.404644 master-0 kubenswrapper[7440]: I0312 14:26:26.404625 7440 generic.go:334] "Generic (PLEG): container finished" podID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" containerID="d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a" exitCode=255 Mar 12 14:26:26.404765 master-0 kubenswrapper[7440]: I0312 14:26:26.404726 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerDied","Data":"d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a"} Mar 12 14:26:26.405340 master-0 kubenswrapper[7440]: I0312 14:26:26.405316 7440 scope.go:117] "RemoveContainer" containerID="d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a" Mar 12 14:26:26.405581 master-0 kubenswrapper[7440]: E0312 14:26:26.405546 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-snapshot-controller-operator pod=csi-snapshot-controller-operator-5685fbc7d-ckmlv_openshift-cluster-storage-operator(8660cef9-0ab3-453e-a4b9-c243daa6ddb0)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" podUID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" Mar 12 14:26:26.407076 master-0 kubenswrapper[7440]: I0312 14:26:26.407042 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/2.log" Mar 12 14:26:26.410270 master-0 kubenswrapper[7440]: 
I0312 14:26:26.410232 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5" exitCode=255 Mar 12 14:26:26.410376 master-0 kubenswrapper[7440]: I0312 14:26:26.410287 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5"} Mar 12 14:26:26.411370 master-0 kubenswrapper[7440]: I0312 14:26:26.411323 7440 scope.go:117] "RemoveContainer" containerID="0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5" Mar 12 14:26:26.411839 master-0 kubenswrapper[7440]: E0312 14:26:26.411764 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:26:26.419591 master-0 kubenswrapper[7440]: I0312 14:26:26.419553 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/1.log" Mar 12 14:26:26.420504 master-0 kubenswrapper[7440]: I0312 14:26:26.420451 7440 scope.go:117] "RemoveContainer" containerID="f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d" Mar 12 14:26:26.420713 master-0 kubenswrapper[7440]: I0312 14:26:26.420682 7440 scope.go:117] "RemoveContainer" containerID="ab1742f72c830599c24487d25e2f7d4998ed83fdb4a1bdbebd1e3d87b6efbbf6" Mar 12 14:26:26.421010 master-0 
kubenswrapper[7440]: E0312 14:26:26.420946 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-6fbfc8dc8f-xgrsw_openshift-cluster-storage-operator(06eb9f4b-167e-435b-8ef6-ae44fc0b85a9)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" podUID="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" Mar 12 14:26:26.423583 master-0 kubenswrapper[7440]: I0312 14:26:26.423526 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-h8sq4_8106d14a-b448-4dd1-bccd-926f85394b5d/cluster-olm-operator/2.log" Mar 12 14:26:26.425836 master-0 kubenswrapper[7440]: I0312 14:26:26.425790 7440 scope.go:117] "RemoveContainer" containerID="07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26" Mar 12 14:26:26.426329 master-0 kubenswrapper[7440]: E0312 14:26:26.426303 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-77899cf6d-h8sq4_openshift-cluster-olm-operator(8106d14a-b448-4dd1-bccd-926f85394b5d)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" podUID="8106d14a-b448-4dd1-bccd-926f85394b5d" Mar 12 14:26:26.428230 master-0 kubenswrapper[7440]: I0312 14:26:26.428193 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-fv6pp_76d596c0-6a41-43e1-9516-aee9ad834ec2/service-ca-operator/2.log" Mar 12 14:26:26.428764 master-0 kubenswrapper[7440]: I0312 14:26:26.428743 7440 scope.go:117] "RemoveContainer" containerID="132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce" Mar 12 14:26:26.429028 master-0 kubenswrapper[7440]: 
E0312 14:26:26.428978 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-69b6fc6b88-fv6pp_openshift-service-ca-operator(76d596c0-6a41-43e1-9516-aee9ad834ec2)\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" podUID="76d596c0-6a41-43e1-9516-aee9ad834ec2"
Mar 12 14:26:26.431137 master-0 kubenswrapper[7440]: I0312 14:26:26.431083 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gt2tw_3f72fbbe-69f0-4622-be05-b839ff9b4d45/openshift-apiserver-operator/2.log"
Mar 12 14:26:26.431914 master-0 kubenswrapper[7440]: I0312 14:26:26.431847 7440 scope.go:117] "RemoveContainer" containerID="46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c"
Mar 12 14:26:26.432350 master-0 kubenswrapper[7440]: E0312 14:26:26.432291 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-799b6db4d7-gt2tw_openshift-apiserver-operator(3f72fbbe-69f0-4622-be05-b839ff9b4d45)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" podUID="3f72fbbe-69f0-4622-be05-b839ff9b4d45"
Mar 12 14:26:26.449466 master-0 kubenswrapper[7440]: I0312 14:26:26.449416 7440 scope.go:117] "RemoveContainer" containerID="10e2670e6ab6b47f07948c60e7e3a46c3f0ed3468cba558c9fc231e5dc2ca43a"
Mar 12 14:26:26.952625 master-0 kubenswrapper[7440]: I0312 14:26:26.952547 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:26:27.132598 master-0 kubenswrapper[7440]: I0312 14:26:27.132518 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:27.132598 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:27.132598 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:27.132598 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:27.133209 master-0 kubenswrapper[7440]: I0312 14:26:27.132616 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:27.441372 master-0 kubenswrapper[7440]: I0312 14:26:27.441295 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/2.log"
Mar 12 14:26:27.443932 master-0 kubenswrapper[7440]: I0312 14:26:27.443824 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/2.log"
Mar 12 14:26:27.446231 master-0 kubenswrapper[7440]: I0312 14:26:27.446174 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/2.log"
Mar 12 14:26:27.447583 master-0 kubenswrapper[7440]: I0312 14:26:27.447527 7440 scope.go:117] "RemoveContainer" containerID="0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5"
Mar 12 14:26:27.448023 master-0 kubenswrapper[7440]: E0312 14:26:27.447961 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c"
Mar 12 14:26:27.808364 master-0 kubenswrapper[7440]: I0312 14:26:27.808205 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Mar 12 14:26:27.808364 master-0 kubenswrapper[7440]: I0312 14:26:27.808301 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Mar 12 14:26:28.131454 master-0 kubenswrapper[7440]: I0312 14:26:28.131380 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:28.131454 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:28.131454 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:28.131454 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:28.131851 master-0 kubenswrapper[7440]: I0312 14:26:28.131477 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:29.132163 master-0 kubenswrapper[7440]: I0312 14:26:29.132083 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:29.132163 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:29.132163 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:29.132163 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:29.132163 master-0 kubenswrapper[7440]: I0312 14:26:29.132153 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:30.131435 master-0 kubenswrapper[7440]: I0312 14:26:30.131355 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:30.131435 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:30.131435 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:30.131435 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:30.131711 master-0 kubenswrapper[7440]: I0312 14:26:30.131467 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:30.787867 master-0 kubenswrapper[7440]: I0312 14:26:30.787790 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 14:26:30.788460 master-0 kubenswrapper[7440]: I0312 14:26:30.787864 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:26:30.952216 master-0 kubenswrapper[7440]: I0312 14:26:30.952039 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 12 14:26:31.132061 master-0 kubenswrapper[7440]: I0312 14:26:31.131976 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:31.132061 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:31.132061 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:31.132061 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:31.132061 master-0 kubenswrapper[7440]: I0312 14:26:31.132037 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:32.132781 master-0 kubenswrapper[7440]: I0312 14:26:32.132691 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:32.132781 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:32.132781 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:32.132781 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:32.133765 master-0 kubenswrapper[7440]: I0312 14:26:32.132806 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:33.132203 master-0 kubenswrapper[7440]: I0312 14:26:33.132136 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:33.132203 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:33.132203 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:33.132203 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:33.132461 master-0 kubenswrapper[7440]: I0312 14:26:33.132235 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:33.591035 master-0 kubenswrapper[7440]: E0312 14:26:33.590966 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:26:34.131491 master-0 kubenswrapper[7440]: I0312 14:26:34.131426 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:34.131491 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:34.131491 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:34.131491 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:34.131972 master-0 kubenswrapper[7440]: I0312 14:26:34.131515 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:35.108524 master-0 kubenswrapper[7440]: E0312 14:26:35.108394 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c1dbc3019e1f7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7fed292c3d5a90a99bfee43e89190405,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:52232->127.0.0.1:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:17:27.375192567 +0000 UTC m=+307.710571116,LastTimestamp:2026-03-12 14:17:27.375192567 +0000 UTC m=+307.710571116,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:26:35.131100 master-0 kubenswrapper[7440]: I0312 14:26:35.131018 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:35.131100 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:35.131100 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:35.131100 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:35.131392 master-0 kubenswrapper[7440]: I0312 14:26:35.131152 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:36.131167 master-0 kubenswrapper[7440]: I0312 14:26:36.131080 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:36.131167 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:36.131167 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:36.131167 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:36.131167 master-0 kubenswrapper[7440]: I0312 14:26:36.131159 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:36.644462 master-0 kubenswrapper[7440]: E0312 14:26:36.644394 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 14:26:36.805054 master-0 kubenswrapper[7440]: I0312 14:26:36.804972 7440 scope.go:117] "RemoveContainer" containerID="c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9"
Mar 12 14:26:36.805377 master-0 kubenswrapper[7440]: I0312 14:26:36.805335 7440 scope.go:117] "RemoveContainer" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a"
Mar 12 14:26:36.805534 master-0 kubenswrapper[7440]: I0312 14:26:36.805457 7440 scope.go:117] "RemoveContainer" containerID="132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce"
Mar 12 14:26:37.132386 master-0 kubenswrapper[7440]: I0312 14:26:37.132287 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:37.132386 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:37.132386 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:37.132386 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:37.132386 master-0 kubenswrapper[7440]: I0312 14:26:37.132353 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:37.521452 master-0 kubenswrapper[7440]: I0312 14:26:37.521370 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-vpn8v_08ea0d9f-0635-4759-803e-572eca2f2d34/kube-scheduler-operator-container/1.log"
Mar 12 14:26:37.521785 master-0 kubenswrapper[7440]: I0312 14:26:37.521540 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"78e4947f344b5bf77c640296e1ec1a396c45d29d30d4a66e0eef8ce340e94e05"}
Mar 12 14:26:37.524222 master-0 kubenswrapper[7440]: I0312 14:26:37.524178 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/3.log"
Mar 12 14:26:37.524749 master-0 kubenswrapper[7440]: I0312 14:26:37.524648 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"98c75e1f97fd956ded29ce0a2ec09f912dd4d6fb9c502e3b869d08808fa332fc"}
Mar 12 14:26:37.527114 master-0 kubenswrapper[7440]: I0312 14:26:37.527075 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-fv6pp_76d596c0-6a41-43e1-9516-aee9ad834ec2/service-ca-operator/2.log"
Mar 12 14:26:37.527244 master-0 kubenswrapper[7440]: I0312 14:26:37.527130 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"f38874aef5393d658264082641b6ae35c3855eb55f95d5be3e85d3b60c18eb6a"}
Mar 12 14:26:37.807809 master-0 kubenswrapper[7440]: I0312 14:26:37.807623 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Mar 12 14:26:37.807809 master-0 kubenswrapper[7440]: I0312 14:26:37.807708 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Mar 12 14:26:38.131818 master-0 kubenswrapper[7440]: I0312 14:26:38.131657 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:38.131818 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:38.131818 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:38.131818 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:38.131818 master-0 kubenswrapper[7440]: I0312 14:26:38.131757 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:39.132027 master-0 kubenswrapper[7440]: I0312 14:26:39.131922 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:39.132027 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:39.132027 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:39.132027 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:39.132027 master-0 kubenswrapper[7440]: I0312 14:26:39.132021 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:39.809362 master-0 kubenswrapper[7440]: I0312 14:26:39.809308 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f"
Mar 12 14:26:39.810062 master-0 kubenswrapper[7440]: E0312 14:26:39.809697 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:26:39.810062 master-0 kubenswrapper[7440]: I0312 14:26:39.809799 7440 scope.go:117] "RemoveContainer" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056"
Mar 12 14:26:39.810216 master-0 kubenswrapper[7440]: I0312 14:26:39.810116 7440 scope.go:117] "RemoveContainer" containerID="07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26"
Mar 12 14:26:39.810467 master-0 kubenswrapper[7440]: E0312 14:26:39.810411 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed"
Mar 12 14:26:39.810662 master-0 kubenswrapper[7440]: I0312 14:26:39.810620 7440 scope.go:117] "RemoveContainer" containerID="a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5"
Mar 12 14:26:39.810985 master-0 kubenswrapper[7440]: I0312 14:26:39.810869 7440 scope.go:117] "RemoveContainer" containerID="46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c"
Mar 12 14:26:39.811561 master-0 kubenswrapper[7440]: E0312 14:26:39.811520 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-controller pod=service-ca-84bfdbbb7f-7lx8p_openshift-service-ca(61de099a-410b-4d30-83e8-19cf5901cb27)\"" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" podUID="61de099a-410b-4d30-83e8-19cf5901cb27"
Mar 12 14:26:40.133140 master-0 kubenswrapper[7440]: I0312 14:26:40.133007 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:40.133140 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:40.133140 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:40.133140 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:40.133140 master-0 kubenswrapper[7440]: I0312 14:26:40.133068 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:40.555259 master-0 kubenswrapper[7440]: I0312 14:26:40.555187 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gt2tw_3f72fbbe-69f0-4622-be05-b839ff9b4d45/openshift-apiserver-operator/2.log"
Mar 12 14:26:40.555621 master-0 kubenswrapper[7440]: I0312 14:26:40.555299 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"7256289b9a2663c64b4d6e3489d9934c85fe09c7f80090aa8be7b45d9d4e8d84"}
Mar 12 14:26:40.570343 master-0 kubenswrapper[7440]: I0312 14:26:40.561521 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-h8sq4_8106d14a-b448-4dd1-bccd-926f85394b5d/cluster-olm-operator/2.log"
Mar 12 14:26:40.570343 master-0 kubenswrapper[7440]: I0312 14:26:40.563345 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"b6e949323342e30da019c0c5dd230cd9dc9467fe07077839fa3e6146d2b06774"}
Mar 12 14:26:40.787653 master-0 kubenswrapper[7440]: I0312 14:26:40.787540 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 14:26:40.787653 master-0 kubenswrapper[7440]: I0312 14:26:40.787644 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:26:40.804667 master-0 kubenswrapper[7440]: I0312 14:26:40.804574 7440 scope.go:117] "RemoveContainer" containerID="d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a"
Mar 12 14:26:40.805061 master-0 kubenswrapper[7440]: E0312 14:26:40.804944 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-snapshot-controller-operator pod=csi-snapshot-controller-operator-5685fbc7d-ckmlv_openshift-cluster-storage-operator(8660cef9-0ab3-453e-a4b9-c243daa6ddb0)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" podUID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0"
Mar 12 14:26:40.805803 master-0 kubenswrapper[7440]: I0312 14:26:40.805686 7440 scope.go:117] "RemoveContainer" containerID="f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d"
Mar 12 14:26:41.033026 master-0 kubenswrapper[7440]: E0312 14:26:41.032866 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 14:26:41.132881 master-0 kubenswrapper[7440]: I0312 14:26:41.132723 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:41.132881 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:41.132881 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:41.132881 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:41.132881 master-0 kubenswrapper[7440]: I0312 14:26:41.132806 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:41.575956 master-0 kubenswrapper[7440]: I0312 14:26:41.575852 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/1.log"
Mar 12 14:26:41.576517 master-0 kubenswrapper[7440]: I0312 14:26:41.576472 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"cf3b7b652c6e8beb6b340ea9b42886885fa378a7a2f0d930f3dcc101d315af74"}
Mar 12 14:26:41.805183 master-0 kubenswrapper[7440]: I0312 14:26:41.805137 7440 scope.go:117] "RemoveContainer" containerID="0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5"
Mar 12 14:26:42.133018 master-0 kubenswrapper[7440]: I0312 14:26:42.132823 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:42.133018 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:42.133018 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:42.133018 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:42.133018 master-0 kubenswrapper[7440]: I0312 14:26:42.132979 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:42.585575 master-0 kubenswrapper[7440]: I0312 14:26:42.585474 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/2.log"
Mar 12 14:26:42.586324 master-0 kubenswrapper[7440]: I0312 14:26:42.586258 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce"}
Mar 12 14:26:42.586937 master-0 kubenswrapper[7440]: I0312 14:26:42.586860 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:26:43.132193 master-0 kubenswrapper[7440]: I0312 14:26:43.132096 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:43.132193 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:43.132193 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:43.132193 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:43.132606 master-0 kubenswrapper[7440]: I0312 14:26:43.132200 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:43.591279 master-0 kubenswrapper[7440]: E0312 14:26:43.591201 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:26:44.131754 master-0 kubenswrapper[7440]: I0312 14:26:44.131647 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:44.131754 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:44.131754 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:44.131754 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:44.132023 master-0 kubenswrapper[7440]: I0312 14:26:44.131796 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:44.650402 master-0 kubenswrapper[7440]: I0312 14:26:44.650355 7440 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-mjxsv container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body=
Mar 12 14:26:44.651102 master-0 kubenswrapper[7440]: I0312 14:26:44.651042 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused"
Mar 12 14:26:45.131253 master-0 kubenswrapper[7440]: I0312 14:26:45.131194 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:45.131253 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:45.131253 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:45.131253 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:45.131604 master-0 kubenswrapper[7440]: I0312 14:26:45.131261 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:45.455973 master-0 kubenswrapper[7440]: I0312 14:26:45.455852 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:26:45.455973 master-0 kubenswrapper[7440]: I0312 14:26:45.455932 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:26:46.131607 master-0 kubenswrapper[7440]: I0312 14:26:46.131517 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:46.131607 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:46.131607 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:46.131607 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:46.131607 master-0 kubenswrapper[7440]: I0312 14:26:46.131603 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:47.132999 master-0 kubenswrapper[7440]: I0312 14:26:47.132918 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:26:47.132999 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:26:47.132999 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:26:47.132999 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:26:47.132999 master-0 kubenswrapper[7440]: I0312 14:26:47.132999 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:26:47.807557 master-0 kubenswrapper[7440]: I0312 14:26:47.807491 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Mar 12 14:26:47.807557 master-0 kubenswrapper[7440]: I0312 14:26:47.807557 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Mar 12 14:26:47.813461 master-0 kubenswrapper[7440]: I0312 14:26:47.813430 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:26:47.814246 master-0 kubenswrapper[7440]: I0312 14:26:47.814200 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Mar 12 14:26:47.814419 master-0 kubenswrapper[7440]: I0312 14:26:47.814396 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" containerID="cri-o://ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490" gracePeriod=30
Mar 12 14:26:47.953623 master-0 kubenswrapper[7440]: I0312 14:26:47.953568 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body=
Mar 12 14:26:47.954020 master-0 kubenswrapper[7440]: I0312 14:26:47.953979 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused"
Mar 12 14:26:48.133108 master-0 kubenswrapper[7440]: I0312 14:26:48.132965 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:48.133108 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:48.133108 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:48.133108 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:48.133108 master-0 kubenswrapper[7440]: I0312 14:26:48.133083 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:48.456059 master-0 kubenswrapper[7440]: I0312 14:26:48.455469 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:48.456059 master-0 kubenswrapper[7440]: I0312 14:26:48.455537 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:48.637247 master-0 kubenswrapper[7440]: I0312 14:26:48.637175 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/3.log" Mar 12 14:26:48.638054 master-0 kubenswrapper[7440]: I0312 14:26:48.638000 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/2.log" Mar 12 14:26:48.638154 master-0 kubenswrapper[7440]: I0312 14:26:48.638085 7440 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490" exitCode=255 Mar 12 14:26:48.638233 master-0 kubenswrapper[7440]: I0312 14:26:48.638139 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490"} Mar 12 14:26:48.638233 master-0 kubenswrapper[7440]: I0312 14:26:48.638199 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93"} Mar 12 14:26:48.638357 master-0 kubenswrapper[7440]: I0312 14:26:48.638233 7440 scope.go:117] "RemoveContainer" containerID="91255c6b16c7af2529c1e521fdbc69eade224ea969c92c151d4e92cf91d45cc1" Mar 12 14:26:49.132381 master-0 kubenswrapper[7440]: I0312 14:26:49.132293 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:49.132381 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:49.132381 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:49.132381 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:49.132762 master-0 kubenswrapper[7440]: I0312 14:26:49.132396 7440 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:49.650987 master-0 kubenswrapper[7440]: I0312 14:26:49.650892 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/3.log" Mar 12 14:26:50.132029 master-0 kubenswrapper[7440]: I0312 14:26:50.131966 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:50.132029 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:50.132029 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:50.132029 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:50.132289 master-0 kubenswrapper[7440]: I0312 14:26:50.132058 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:50.532626 master-0 kubenswrapper[7440]: E0312 14:26:50.532540 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 14:26:50.788475 master-0 kubenswrapper[7440]: I0312 14:26:50.788358 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:26:50.788475 master-0 kubenswrapper[7440]: I0312 14:26:50.788421 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:50.805745 master-0 kubenswrapper[7440]: I0312 14:26:50.805682 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:50.806138 master-0 kubenswrapper[7440]: E0312 14:26:50.806095 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:50.953288 master-0 kubenswrapper[7440]: I0312 14:26:50.953234 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:50.953540 master-0 kubenswrapper[7440]: I0312 14:26:50.953295 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:50.953540 master-0 kubenswrapper[7440]: I0312 14:26:50.953338 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:26:50.954026 master-0 kubenswrapper[7440]: I0312 14:26:50.953987 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:50.954026 master-0 kubenswrapper[7440]: I0312 14:26:50.954005 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 12 14:26:50.954130 master-0 kubenswrapper[7440]: I0312 14:26:50.954033 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:50.954130 master-0 kubenswrapper[7440]: I0312 14:26:50.954040 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" 
containerID="cri-o://b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce" gracePeriod=30 Mar 12 14:26:51.131987 master-0 kubenswrapper[7440]: I0312 14:26:51.131843 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:51.131987 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:51.131987 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:51.131987 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:51.132212 master-0 kubenswrapper[7440]: I0312 14:26:51.131973 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:51.454806 master-0 kubenswrapper[7440]: I0312 14:26:51.454751 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:51.455167 master-0 kubenswrapper[7440]: I0312 14:26:51.455133 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:51.666961 master-0 kubenswrapper[7440]: I0312 14:26:51.666740 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/3.log" Mar 12 14:26:51.668153 master-0 kubenswrapper[7440]: I0312 14:26:51.668081 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/2.log" Mar 12 14:26:51.669995 master-0 kubenswrapper[7440]: I0312 14:26:51.669822 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce" exitCode=255 Mar 12 14:26:51.669995 master-0 kubenswrapper[7440]: I0312 14:26:51.669971 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce"} Mar 12 14:26:51.670276 master-0 kubenswrapper[7440]: I0312 14:26:51.670062 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392"} Mar 12 14:26:51.670276 master-0 kubenswrapper[7440]: I0312 14:26:51.670104 7440 scope.go:117] "RemoveContainer" containerID="0bc982c3725d14223ab24d0dc070fc9eb1be21068c5ee128ccc02aa0ec0f60c5" Mar 12 14:26:51.670567 master-0 kubenswrapper[7440]: I0312 14:26:51.670470 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:26:51.805663 master-0 kubenswrapper[7440]: I0312 14:26:51.805587 7440 scope.go:117] "RemoveContainer" 
containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" Mar 12 14:26:52.132631 master-0 kubenswrapper[7440]: I0312 14:26:52.132554 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:52.132631 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:52.132631 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:52.132631 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:52.133105 master-0 kubenswrapper[7440]: I0312 14:26:52.132639 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:52.679098 master-0 kubenswrapper[7440]: I0312 14:26:52.679050 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/3.log" Mar 12 14:26:52.682336 master-0 kubenswrapper[7440]: I0312 14:26:52.682291 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/4.log" Mar 12 14:26:52.682427 master-0 kubenswrapper[7440]: I0312 14:26:52.682375 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"} Mar 12 14:26:53.131810 master-0 kubenswrapper[7440]: I0312 14:26:53.131754 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:53.131810 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:53.131810 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:53.131810 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:53.132606 master-0 kubenswrapper[7440]: I0312 14:26:53.131821 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:53.591809 master-0 kubenswrapper[7440]: E0312 14:26:53.591465 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:26:53.591809 master-0 kubenswrapper[7440]: E0312 14:26:53.591499 7440 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 14:26:53.805246 master-0 kubenswrapper[7440]: I0312 14:26:53.805197 7440 scope.go:117] "RemoveContainer" containerID="a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5" Mar 12 14:26:54.131203 master-0 kubenswrapper[7440]: I0312 14:26:54.131080 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:54.131203 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:54.131203 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:26:54.131203 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:54.131203 master-0 kubenswrapper[7440]: I0312 14:26:54.131142 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:54.461456 master-0 kubenswrapper[7440]: I0312 14:26:54.454468 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:54.461456 master-0 kubenswrapper[7440]: I0312 14:26:54.454530 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:54.698587 master-0 kubenswrapper[7440]: I0312 14:26:54.698531 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/2.log" Mar 12 14:26:54.698812 master-0 kubenswrapper[7440]: I0312 14:26:54.698648 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"4b830aae57df8ccdd98824435ccd52272794e23367b55e5ca8ce1c42ac9a4c48"} Mar 12 14:26:54.700366 master-0 kubenswrapper[7440]: I0312 14:26:54.700339 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/2.log" Mar 12 14:26:54.700828 master-0 kubenswrapper[7440]: I0312 14:26:54.700807 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/1.log" Mar 12 14:26:54.700975 master-0 kubenswrapper[7440]: I0312 14:26:54.700939 7440 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" exitCode=255 Mar 12 14:26:54.701071 master-0 kubenswrapper[7440]: I0312 14:26:54.701017 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6"} Mar 12 14:26:54.701135 master-0 kubenswrapper[7440]: I0312 14:26:54.701117 7440 scope.go:117] "RemoveContainer" containerID="b0ef8cb458573461dc78ec84dd70e59e9585b138f2517187a17259dabba2dfeb" Mar 12 14:26:54.702057 master-0 kubenswrapper[7440]: I0312 14:26:54.702024 7440 scope.go:117] "RemoveContainer" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" Mar 12 14:26:54.702491 master-0 kubenswrapper[7440]: E0312 14:26:54.702453 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" 
Mar 12 14:26:54.805655 master-0 kubenswrapper[7440]: I0312 14:26:54.805605 7440 scope.go:117] "RemoveContainer" containerID="d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a" Mar 12 14:26:55.131463 master-0 kubenswrapper[7440]: I0312 14:26:55.131411 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:55.131463 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:55.131463 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:55.131463 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:55.131752 master-0 kubenswrapper[7440]: I0312 14:26:55.131477 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:55.712495 master-0 kubenswrapper[7440]: I0312 14:26:55.712434 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/2.log" Mar 12 14:26:55.713425 master-0 kubenswrapper[7440]: I0312 14:26:55.713233 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"36113a200e00efea87bc465d209049d07954fd38fc45547a2de2a279634e07cb"} Mar 12 14:26:55.717669 master-0 kubenswrapper[7440]: I0312 14:26:55.717608 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/2.log" Mar 12 14:26:55.721812 master-0 kubenswrapper[7440]: I0312 14:26:55.721735 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/3.log" Mar 12 14:26:55.722757 master-0 kubenswrapper[7440]: I0312 14:26:55.722687 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/2.log" Mar 12 14:26:55.723407 master-0 kubenswrapper[7440]: I0312 14:26:55.723332 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" exitCode=1 Mar 12 14:26:55.723528 master-0 kubenswrapper[7440]: I0312 14:26:55.723403 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66"} Mar 12 14:26:55.723528 master-0 kubenswrapper[7440]: I0312 14:26:55.723468 7440 scope.go:117] "RemoveContainer" containerID="45abcab2b6c821296572dad37b9e6f9ba63e552dbae8db16db31cb4dc1b36a86" Mar 12 14:26:55.724354 master-0 kubenswrapper[7440]: I0312 14:26:55.724305 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" Mar 12 14:26:55.724848 master-0 kubenswrapper[7440]: E0312 14:26:55.724771 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:26:56.132477 master-0 kubenswrapper[7440]: I0312 14:26:56.132383 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:56.132477 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:56.132477 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:56.132477 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:56.132890 master-0 kubenswrapper[7440]: I0312 14:26:56.132511 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:56.753496 master-0 kubenswrapper[7440]: I0312 14:26:56.753436 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/0.log" Mar 12 14:26:56.754136 master-0 kubenswrapper[7440]: I0312 14:26:56.754095 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log" Mar 12 14:26:56.754506 master-0 kubenswrapper[7440]: I0312 14:26:56.754470 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="2aee18625338d290a376474bbeead6c6bef3630d9c0a26ff9cffcf446662e724" exitCode=1 Mar 12 14:26:56.754576 master-0 kubenswrapper[7440]: I0312 14:26:56.754547 7440 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerDied","Data":"2aee18625338d290a376474bbeead6c6bef3630d9c0a26ff9cffcf446662e724"} Mar 12 14:26:56.755319 master-0 kubenswrapper[7440]: I0312 14:26:56.755268 7440 scope.go:117] "RemoveContainer" containerID="2aee18625338d290a376474bbeead6c6bef3630d9c0a26ff9cffcf446662e724" Mar 12 14:26:56.756378 master-0 kubenswrapper[7440]: I0312 14:26:56.756269 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/3.log" Mar 12 14:26:56.953164 master-0 kubenswrapper[7440]: I0312 14:26:56.953118 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:56.953275 master-0 kubenswrapper[7440]: I0312 14:26:56.953183 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:57.132038 master-0 kubenswrapper[7440]: I0312 14:26:57.131972 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:57.132038 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:57.132038 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:26:57.132038 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:57.132293 master-0 kubenswrapper[7440]: I0312 14:26:57.132038 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:57.457704 master-0 kubenswrapper[7440]: I0312 14:26:57.456402 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:57.458431 master-0 kubenswrapper[7440]: I0312 14:26:57.458208 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:26:57.766220 master-0 kubenswrapper[7440]: I0312 14:26:57.766127 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:26:57.767698 master-0 kubenswrapper[7440]: I0312 14:26:57.767641 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:26:57.769162 master-0 kubenswrapper[7440]: I0312 14:26:57.768281 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" 
containerID="897e913e5a5888d39eecca73ba6606dae5753683c29db8129ecaf95abc7f3cbb" exitCode=1 Mar 12 14:26:57.769162 master-0 kubenswrapper[7440]: I0312 14:26:57.768361 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"897e913e5a5888d39eecca73ba6606dae5753683c29db8129ecaf95abc7f3cbb"} Mar 12 14:26:57.769162 master-0 kubenswrapper[7440]: I0312 14:26:57.769010 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:57.769162 master-0 kubenswrapper[7440]: I0312 14:26:57.769034 7440 scope.go:117] "RemoveContainer" containerID="897e913e5a5888d39eecca73ba6606dae5753683c29db8129ecaf95abc7f3cbb" Mar 12 14:26:57.775106 master-0 kubenswrapper[7440]: I0312 14:26:57.775080 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/0.log" Mar 12 14:26:57.776080 master-0 kubenswrapper[7440]: I0312 14:26:57.776025 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log" Mar 12 14:26:57.776564 master-0 kubenswrapper[7440]: I0312 14:26:57.776535 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"c6a711bc27e73e2efc239fb72d1184e6","Type":"ContainerStarted","Data":"d961cd077c4348f499a31e617d8bf3df9410762f91851718b3122d68eafa5a20"} Mar 12 14:26:58.008060 master-0 kubenswrapper[7440]: E0312 14:26:58.008001 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:58.034472 master-0 kubenswrapper[7440]: E0312 14:26:58.034289 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:26:58.132245 master-0 kubenswrapper[7440]: I0312 14:26:58.132175 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:58.132245 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:58.132245 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:58.132245 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:58.132734 master-0 kubenswrapper[7440]: I0312 14:26:58.132253 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:58.792233 master-0 kubenswrapper[7440]: I0312 14:26:58.791991 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:26:58.793640 master-0 kubenswrapper[7440]: I0312 14:26:58.793577 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:26:58.794140 master-0 kubenswrapper[7440]: I0312 14:26:58.794064 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"0dec01a437416a94b1faca50b639752f8ecf1a0b753ff095fb2b1362f1488914"} Mar 12 14:26:58.795754 master-0 kubenswrapper[7440]: I0312 14:26:58.795674 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:26:58.796480 master-0 kubenswrapper[7440]: E0312 14:26:58.796431 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:26:59.133077 master-0 kubenswrapper[7440]: I0312 14:26:59.132826 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:26:59.133077 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:26:59.133077 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:26:59.133077 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:26:59.133701 master-0 kubenswrapper[7440]: I0312 14:26:59.133647 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:26:59.787160 master-0 kubenswrapper[7440]: I0312 14:26:59.787087 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:26:59.788010 master-0 kubenswrapper[7440]: I0312 14:26:59.787830 7440 scope.go:117] "RemoveContainer" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" Mar 12 14:26:59.788138 master-0 kubenswrapper[7440]: E0312 14:26:59.788093 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:26:59.953041 master-0 kubenswrapper[7440]: I0312 14:26:59.952972 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:26:59.953041 master-0 kubenswrapper[7440]: I0312 14:26:59.953038 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:27:00.132512 master-0 kubenswrapper[7440]: I0312 14:27:00.132381 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:00.132512 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:00.132512 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:00.132512 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:00.132512 master-0 kubenswrapper[7440]: I0312 14:27:00.132445 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:00.455509 master-0 kubenswrapper[7440]: I0312 14:27:00.455303 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:27:00.455509 master-0 kubenswrapper[7440]: I0312 14:27:00.455363 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:27:01.131086 master-0 kubenswrapper[7440]: I0312 14:27:01.131036 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:01.131086 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:27:01.131086 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:01.131086 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:01.131626 master-0 kubenswrapper[7440]: I0312 14:27:01.131108 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:02.132252 master-0 kubenswrapper[7440]: I0312 14:27:02.132170 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:02.132252 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:02.132252 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:02.132252 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:02.132252 master-0 kubenswrapper[7440]: I0312 14:27:02.132247 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:02.953918 master-0 kubenswrapper[7440]: I0312 14:27:02.953834 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:27:02.954210 master-0 kubenswrapper[7440]: I0312 14:27:02.953952 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" 
podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:27:02.954210 master-0 kubenswrapper[7440]: I0312 14:27:02.954012 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:27:02.954749 master-0 kubenswrapper[7440]: I0312 14:27:02.954702 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 12 14:27:02.954749 master-0 kubenswrapper[7440]: I0312 14:27:02.954746 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" containerID="cri-o://1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" gracePeriod=30 Mar 12 14:27:02.954945 master-0 kubenswrapper[7440]: I0312 14:27:02.954819 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:27:02.955022 master-0 kubenswrapper[7440]: I0312 14:27:02.954969 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:27:03.133193 master-0 kubenswrapper[7440]: I0312 14:27:03.133143 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:03.133193 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:03.133193 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:03.133193 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:03.133824 master-0 kubenswrapper[7440]: I0312 14:27:03.133204 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:03.454422 master-0 kubenswrapper[7440]: I0312 14:27:03.454362 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 14:27:03.454565 master-0 kubenswrapper[7440]: I0312 14:27:03.454425 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 14:27:03.478981 master-0 kubenswrapper[7440]: E0312 14:27:03.478876 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:27:03.518100 master-0 kubenswrapper[7440]: E0312 14:27:03.517943 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a898118_6d01_4211_92f0_43967b75405c.slice/crio-1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a898118_6d01_4211_92f0_43967b75405c.slice/crio-conmon-1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:27:03.836524 master-0 kubenswrapper[7440]: I0312 14:27:03.836406 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/4.log" Mar 12 14:27:03.837111 master-0 kubenswrapper[7440]: I0312 14:27:03.837051 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/3.log" Mar 12 14:27:03.837682 master-0 kubenswrapper[7440]: I0312 14:27:03.837639 7440 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" exitCode=255 Mar 12 14:27:03.838057 master-0 kubenswrapper[7440]: I0312 14:27:03.837993 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392"} Mar 12 14:27:03.838278 master-0 kubenswrapper[7440]: I0312 14:27:03.838249 7440 scope.go:117] "RemoveContainer" containerID="b4cae7e8c6dd597b1ab7b2fec14b29c512e54d2883fd5a316cf1266ec46f69ce" Mar 12 14:27:03.838850 master-0 kubenswrapper[7440]: I0312 14:27:03.838818 7440 scope.go:117] "RemoveContainer" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" Mar 12 14:27:03.839111 master-0 kubenswrapper[7440]: E0312 14:27:03.839079 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:27:04.132954 master-0 kubenswrapper[7440]: I0312 14:27:04.132775 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:04.132954 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:04.132954 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:04.132954 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:04.132954 master-0 kubenswrapper[7440]: I0312 14:27:04.132857 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 12 14:27:04.844121 master-0 kubenswrapper[7440]: I0312 14:27:04.844089 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/4.log" Mar 12 14:27:05.132174 master-0 kubenswrapper[7440]: I0312 14:27:05.131971 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:05.132174 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:05.132174 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:05.132174 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:05.132174 master-0 kubenswrapper[7440]: I0312 14:27:05.132036 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:06.132419 master-0 kubenswrapper[7440]: I0312 14:27:06.132352 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:06.132419 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:06.132419 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:06.132419 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:06.133486 master-0 kubenswrapper[7440]: I0312 14:27:06.132431 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:06.133486 master-0 kubenswrapper[7440]: I0312 14:27:06.132483 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:27:06.133486 master-0 kubenswrapper[7440]: I0312 14:27:06.133066 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted" Mar 12 14:27:06.133486 master-0 kubenswrapper[7440]: I0312 14:27:06.133092 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf" gracePeriod=3600 Mar 12 14:27:07.805041 master-0 kubenswrapper[7440]: I0312 14:27:07.804981 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" Mar 12 14:27:07.805955 master-0 kubenswrapper[7440]: E0312 14:27:07.805192 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: E0312 14:27:09.111626 7440 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
event=< Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: &Event{ObjectMeta:{router-default-79f8cd6fdd-gjwhp.189c1dafcad31f81 openshift-ingress 11565 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-gjwhp,UID:e7f6ebd3-98c8-457c-a88c-7e81270f01b5,APIVersion:v1,ResourceVersion:11065,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: body: [-]backend-http failed: reason withheld Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:16:34 +0000 UTC,LastTimestamp:2026-03-12 14:17:28.131950525 +0000 UTC m=+308.467329084,Count:55,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 14:27:09.111948 master-0 kubenswrapper[7440]: > Mar 12 14:27:11.804607 master-0 kubenswrapper[7440]: I0312 14:27:11.804550 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:27:11.805128 master-0 kubenswrapper[7440]: E0312 14:27:11.804801 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:27:13.684524 master-0 kubenswrapper[7440]: E0312 14:27:13.684339 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:27:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:27:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:27:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T14:27:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f31489
76a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:419c6163a23c12fa8884122764fc9055f901e98f35811ea7b5af57f8a71cdb3c\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bbd5afda20f052626b7914c319e3b44721ac442a05724cfe4199e8736319dcf1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221789390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\
\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efed4867528a19e3de564
47aa00fe53a6d97b74a207e9adb57f06c62dcc8944e\\\"],\\\"sizeBytes\\\":480534195},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:243ce0f08a360370edf4960aec94fc6c5be9d4aae26cf8c5320adcd047c1b14f\\\"],\\\"sizeBytes\\\":471430788}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:13.805627 master-0 kubenswrapper[7440]: I0312 14:27:13.805591 7440 scope.go:117] "RemoveContainer" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" Mar 12 14:27:13.806062 master-0 kubenswrapper[7440]: E0312 14:27:13.806043 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:27:15.035892 master-0 kubenswrapper[7440]: E0312 14:27:15.035737 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:27:18.804673 master-0 kubenswrapper[7440]: I0312 14:27:18.804566 7440 scope.go:117] 
"RemoveContainer" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" Mar 12 14:27:18.805697 master-0 kubenswrapper[7440]: E0312 14:27:18.804776 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:27:19.810836 master-0 kubenswrapper[7440]: I0312 14:27:19.810783 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" Mar 12 14:27:19.811454 master-0 kubenswrapper[7440]: E0312 14:27:19.811029 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:27:22.963356 master-0 kubenswrapper[7440]: I0312 14:27:22.963299 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/5.log" Mar 12 14:27:22.963939 master-0 kubenswrapper[7440]: I0312 14:27:22.963758 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/4.log" Mar 12 14:27:22.963939 master-0 kubenswrapper[7440]: I0312 14:27:22.963825 7440 generic.go:334] "Generic (PLEG): container finished" 
podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" exitCode=1 Mar 12 14:27:22.963939 master-0 kubenswrapper[7440]: I0312 14:27:22.963871 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"} Mar 12 14:27:22.964051 master-0 kubenswrapper[7440]: I0312 14:27:22.963958 7440 scope.go:117] "RemoveContainer" containerID="6475bc0affe8a98c9e1b7717d0757a27fe42a8342fbfe27794215021cef2d056" Mar 12 14:27:22.964869 master-0 kubenswrapper[7440]: I0312 14:27:22.964834 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:27:22.966180 master-0 kubenswrapper[7440]: E0312 14:27:22.966135 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:27:23.684802 master-0 kubenswrapper[7440]: E0312 14:27:23.684734 7440 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:23.805186 master-0 kubenswrapper[7440]: I0312 14:27:23.805130 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:27:23.805432 master-0 kubenswrapper[7440]: 
E0312 14:27:23.805399 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:27:23.970504 master-0 kubenswrapper[7440]: I0312 14:27:23.970410 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/5.log" Mar 12 14:27:27.806128 master-0 kubenswrapper[7440]: I0312 14:27:27.806063 7440 scope.go:117] "RemoveContainer" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" Mar 12 14:27:27.808318 master-0 kubenswrapper[7440]: I0312 14:27:27.808285 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Mar 12 14:27:27.808414 master-0 kubenswrapper[7440]: I0312 14:27:27.808326 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Mar 12 14:27:27.999758 master-0 kubenswrapper[7440]: I0312 14:27:27.999701 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/2.log" Mar 12 14:27:28.000638 master-0 kubenswrapper[7440]: I0312 14:27:27.999763 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c"} Mar 12 14:27:28.001336 master-0 kubenswrapper[7440]: I0312 14:27:28.001206 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:27:28.002554 master-0 kubenswrapper[7440]: I0312 14:27:28.002467 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" start-of-body= Mar 12 14:27:28.002623 master-0 kubenswrapper[7440]: I0312 14:27:28.002597 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": dial tcp 10.128.0.66:8443: connect: connection refused" Mar 12 14:27:30.009082 master-0 kubenswrapper[7440]: I0312 14:27:30.008991 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": context deadline exceeded" start-of-body= Mar 12 14:27:30.009596 master-0 kubenswrapper[7440]: I0312 
14:27:30.009083 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": context deadline exceeded" Mar 12 14:27:30.805506 master-0 kubenswrapper[7440]: I0312 14:27:30.805441 7440 scope.go:117] "RemoveContainer" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" Mar 12 14:27:30.805947 master-0 kubenswrapper[7440]: E0312 14:27:30.805776 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:27:30.805947 master-0 kubenswrapper[7440]: I0312 14:27:30.805793 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" Mar 12 14:27:30.806206 master-0 kubenswrapper[7440]: E0312 14:27:30.806164 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:27:31.010275 master-0 kubenswrapper[7440]: I0312 14:27:31.010203 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:31.010822 master-0 kubenswrapper[7440]: I0312 14:27:31.010281 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:32.037292 master-0 kubenswrapper[7440]: E0312 14:27:32.037210 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:27:34.805370 master-0 kubenswrapper[7440]: I0312 14:27:34.805298 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:27:36.051490 master-0 kubenswrapper[7440]: I0312 14:27:36.051413 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:27:36.052657 master-0 kubenswrapper[7440]: I0312 14:27:36.052391 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:27:36.053093 master-0 kubenswrapper[7440]: I0312 14:27:36.053021 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"} Mar 12 14:27:36.123636 master-0 kubenswrapper[7440]: I0312 14:27:36.123560 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=640.123537328 podStartE2EDuration="10m40.123537328s" podCreationTimestamp="2026-03-12 14:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:27:36.120682466 +0000 UTC m=+916.456061045" watchObservedRunningTime="2026-03-12 14:27:36.123537328 +0000 UTC m=+916.458915897" Mar 12 14:27:36.516792 master-0 kubenswrapper[7440]: I0312 14:27:36.516696 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:27:36.517113 master-0 kubenswrapper[7440]: I0312 14:27:36.516942 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:27:36.805496 master-0 kubenswrapper[7440]: I0312 14:27:36.805344 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:27:36.805741 master-0 kubenswrapper[7440]: E0312 14:27:36.805707 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:27:37.807305 master-0 kubenswrapper[7440]: I0312 14:27:37.807233 
7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Mar 12 14:27:37.807305 master-0 kubenswrapper[7440]: I0312 14:27:37.807287 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Mar 12 14:27:39.517059 master-0 kubenswrapper[7440]: I0312 14:27:39.516971 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:39.517059 master-0 kubenswrapper[7440]: I0312 14:27:39.517057 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:40.788491 master-0 kubenswrapper[7440]: I0312 14:27:40.788422 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:40.788491 master-0 kubenswrapper[7440]: I0312 14:27:40.788478 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:42.806019 master-0 kubenswrapper[7440]: I0312 14:27:42.805966 7440 scope.go:117] "RemoveContainer" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" Mar 12 14:27:42.807083 master-0 kubenswrapper[7440]: E0312 14:27:42.806144 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-ljnjj_openshift-config-operator(0a898118-6d01-4211-92f0-43967b75405c)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" Mar 12 14:27:44.805269 master-0 kubenswrapper[7440]: I0312 14:27:44.805229 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66" Mar 12 14:27:45.108520 master-0 kubenswrapper[7440]: I0312 14:27:45.108385 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/3.log" Mar 12 14:27:45.108752 master-0 kubenswrapper[7440]: I0312 14:27:45.108715 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" 
event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"} Mar 12 14:27:48.807995 master-0 kubenswrapper[7440]: I0312 14:27:48.807734 7440 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-jpf47 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:48.807995 master-0 kubenswrapper[7440]: I0312 14:27:48.807825 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:48.807995 master-0 kubenswrapper[7440]: I0312 14:27:48.807866 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:27:48.809202 master-0 kubenswrapper[7440]: I0312 14:27:48.808302 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 12 14:27:48.809202 master-0 kubenswrapper[7440]: I0312 14:27:48.808330 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" 
podUID="57930a54-89ab-4ec8-a504-74035bb74d63" containerName="authentication-operator" containerID="cri-o://8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" gracePeriod=30 Mar 12 14:27:48.926979 master-0 kubenswrapper[7440]: E0312 14:27:48.926913 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:27:49.038365 master-0 kubenswrapper[7440]: E0312 14:27:49.038266 7440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 14:27:49.135487 master-0 kubenswrapper[7440]: I0312 14:27:49.135314 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/4.log" Mar 12 14:27:49.135743 master-0 kubenswrapper[7440]: I0312 14:27:49.135713 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/3.log" Mar 12 14:27:49.135815 master-0 kubenswrapper[7440]: I0312 14:27:49.135757 7440 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" exitCode=255 Mar 12 14:27:49.135815 master-0 kubenswrapper[7440]: I0312 
14:27:49.135789 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93"} Mar 12 14:27:49.135942 master-0 kubenswrapper[7440]: I0312 14:27:49.135827 7440 scope.go:117] "RemoveContainer" containerID="ccb4e996c4095d3424f211c34c210a7991baf5a57a30f0b35ae26da073728490" Mar 12 14:27:49.136792 master-0 kubenswrapper[7440]: I0312 14:27:49.136442 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:27:49.136792 master-0 kubenswrapper[7440]: E0312 14:27:49.136660 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:27:49.519614 master-0 kubenswrapper[7440]: I0312 14:27:49.519549 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:49.519939 master-0 kubenswrapper[7440]: I0312 14:27:49.519913 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:50.149212 master-0 kubenswrapper[7440]: I0312 14:27:50.149150 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/4.log" Mar 12 14:27:50.788717 master-0 kubenswrapper[7440]: I0312 14:27:50.788532 7440 patch_prober.go:28] interesting pod/route-controller-manager-7f8bfc67b-pz8rc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:50.789086 master-0 kubenswrapper[7440]: I0312 14:27:50.788638 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:50.805374 master-0 kubenswrapper[7440]: I0312 14:27:50.805322 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:27:50.805636 master-0 kubenswrapper[7440]: E0312 14:27:50.805585 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:27:53.176216 master-0 kubenswrapper[7440]: I0312 14:27:53.176145 7440 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf" exitCode=0 Mar 12 14:27:53.176216 master-0 kubenswrapper[7440]: I0312 14:27:53.176212 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf"} Mar 12 14:27:53.176216 master-0 kubenswrapper[7440]: I0312 14:27:53.176247 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595"} Mar 12 14:27:53.177200 master-0 kubenswrapper[7440]: I0312 14:27:53.176264 7440 scope.go:117] "RemoveContainer" containerID="8e14f7d442275322d3e494f60cf9fca855dca850e1bd67ff3f7aec976914d196" Mar 12 14:27:53.806023 master-0 kubenswrapper[7440]: I0312 14:27:53.805879 7440 scope.go:117] "RemoveContainer" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" Mar 12 14:27:54.129510 master-0 kubenswrapper[7440]: I0312 14:27:54.129462 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:27:54.132642 master-0 kubenswrapper[7440]: I0312 14:27:54.132607 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:27:54.132642 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:54.132642 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:54.132642 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:54.132884 master-0 kubenswrapper[7440]: I0312 14:27:54.132655 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:54.133087 master-0 kubenswrapper[7440]: I0312 14:27:54.133061 7440 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-mjxsv container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": read tcp 10.128.0.2:51678->10.128.0.9:8443: read: connection reset by peer" start-of-body= Mar 12 14:27:54.133209 master-0 kubenswrapper[7440]: I0312 14:27:54.133185 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": read tcp 10.128.0.2:51678->10.128.0.9:8443: read: connection reset by peer" Mar 12 14:27:54.133318 master-0 kubenswrapper[7440]: I0312 14:27:54.133302 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:27:54.134578 master-0 kubenswrapper[7440]: I0312 14:27:54.134550 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="etcd-operator" containerStatusID={"Type":"cri-o","ID":"dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9"} pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" containerMessage="Container etcd-operator failed liveness probe, will be restarted" Mar 12 
14:27:54.134720 master-0 kubenswrapper[7440]: I0312 14:27:54.134696 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" podUID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerName="etcd-operator" containerID="cri-o://dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9" gracePeriod=30 Mar 12 14:27:54.189553 master-0 kubenswrapper[7440]: I0312 14:27:54.189515 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/3.log" Mar 12 14:27:54.190370 master-0 kubenswrapper[7440]: I0312 14:27:54.190341 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/2.log" Mar 12 14:27:54.190430 master-0 kubenswrapper[7440]: I0312 14:27:54.190394 7440 generic.go:334] "Generic (PLEG): container finished" podID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" exitCode=255 Mar 12 14:27:54.190513 master-0 kubenswrapper[7440]: I0312 14:27:54.190480 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerDied","Data":"cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c"} Mar 12 14:27:54.190575 master-0 kubenswrapper[7440]: I0312 14:27:54.190536 7440 scope.go:117] "RemoveContainer" containerID="95dd32cf12bfc127e14e6bb356ac107cba94348a2608b67065159ea570fe224b" Mar 12 14:27:54.192372 master-0 kubenswrapper[7440]: I0312 14:27:54.192330 7440 scope.go:117] "RemoveContainer" 
containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" Mar 12 14:27:54.194484 master-0 kubenswrapper[7440]: E0312 14:27:54.194415 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-zwdgk_openshift-controller-manager-operator(d00a8cc7-7774-40bd-94a1-9ac2d0f63234)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" Mar 12 14:27:54.195506 master-0 kubenswrapper[7440]: I0312 14:27:54.195473 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/3.log" Mar 12 14:27:54.196078 master-0 kubenswrapper[7440]: I0312 14:27:54.196045 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/2.log" Mar 12 14:27:54.196158 master-0 kubenswrapper[7440]: I0312 14:27:54.196084 7440 generic.go:334] "Generic (PLEG): container finished" podID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" exitCode=255 Mar 12 14:27:54.196158 master-0 kubenswrapper[7440]: I0312 14:27:54.196128 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerDied","Data":"5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb"} Mar 12 14:27:54.196692 master-0 kubenswrapper[7440]: I0312 14:27:54.196654 7440 scope.go:117] "RemoveContainer" 
containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" Mar 12 14:27:54.196974 master-0 kubenswrapper[7440]: E0312 14:27:54.196881 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-smpl5_openshift-kube-apiserver-operator(a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" podUID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" Mar 12 14:27:54.200866 master-0 kubenswrapper[7440]: I0312 14:27:54.199389 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/3.log" Mar 12 14:27:54.200866 master-0 kubenswrapper[7440]: I0312 14:27:54.199963 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/2.log" Mar 12 14:27:54.200866 master-0 kubenswrapper[7440]: I0312 14:27:54.199997 7440 generic.go:334] "Generic (PLEG): container finished" podID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerID="dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9" exitCode=255 Mar 12 14:27:54.200866 master-0 kubenswrapper[7440]: I0312 14:27:54.200054 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerDied","Data":"dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9"} Mar 12 14:27:54.201692 master-0 kubenswrapper[7440]: I0312 14:27:54.201657 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/3.log" Mar 12 
14:27:54.202126 master-0 kubenswrapper[7440]: I0312 14:27:54.202097 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/2.log" Mar 12 14:27:54.202170 master-0 kubenswrapper[7440]: I0312 14:27:54.202135 7440 generic.go:334] "Generic (PLEG): container finished" podID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" exitCode=255 Mar 12 14:27:54.202277 master-0 kubenswrapper[7440]: I0312 14:27:54.202181 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerDied","Data":"48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba"} Mar 12 14:27:54.202577 master-0 kubenswrapper[7440]: I0312 14:27:54.202545 7440 scope.go:117] "RemoveContainer" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" Mar 12 14:27:54.202776 master-0 kubenswrapper[7440]: E0312 14:27:54.202745 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-7c649bf6d4-ldxfn_openshift-network-operator(7433d9bf-4edf-4787-a7a1-e5102c7264c7)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" podUID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" Mar 12 14:27:54.205524 master-0 kubenswrapper[7440]: I0312 14:27:54.205491 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/4.log" Mar 12 14:27:54.206480 master-0 kubenswrapper[7440]: I0312 14:27:54.206447 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"c2dbf5a09af1e2fa063a0458bcbee562bc513bd8f67fc9f514462e42c6e7aba0"} Mar 12 14:27:54.207775 master-0 kubenswrapper[7440]: I0312 14:27:54.207749 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:27:54.210328 master-0 kubenswrapper[7440]: I0312 14:27:54.210302 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/3.log" Mar 12 14:27:54.211570 master-0 kubenswrapper[7440]: I0312 14:27:54.211542 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/2.log" Mar 12 14:27:54.211646 master-0 kubenswrapper[7440]: I0312 14:27:54.211611 7440 generic.go:334] "Generic (PLEG): container finished" podID="3dc73c14-852d-4957-b6ac-84366ba0594f" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" exitCode=255 Mar 12 14:27:54.211808 master-0 kubenswrapper[7440]: I0312 14:27:54.211755 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerDied","Data":"7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199"} Mar 12 14:27:54.212457 master-0 kubenswrapper[7440]: I0312 14:27:54.212418 7440 scope.go:117] "RemoveContainer" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" Mar 12 14:27:54.212675 master-0 
kubenswrapper[7440]: E0312 14:27:54.212638 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-7f65c457f5-hkf2t_openshift-kube-storage-version-migrator-operator(3dc73c14-852d-4957-b6ac-84366ba0594f)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" podUID="3dc73c14-852d-4957-b6ac-84366ba0594f" Mar 12 14:27:54.216057 master-0 kubenswrapper[7440]: I0312 14:27:54.216013 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/3.log" Mar 12 14:27:54.216352 master-0 kubenswrapper[7440]: I0312 14:27:54.216326 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/2.log" Mar 12 14:27:54.216443 master-0 kubenswrapper[7440]: I0312 14:27:54.216415 7440 generic.go:334] "Generic (PLEG): container finished" podID="1bba274a-38c7-4d13-88a5-6bc39228416c" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" exitCode=255 Mar 12 14:27:54.216482 master-0 kubenswrapper[7440]: I0312 14:27:54.216454 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerDied","Data":"a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666"} Mar 12 14:27:54.217200 master-0 kubenswrapper[7440]: I0312 14:27:54.216886 7440 scope.go:117] "RemoveContainer" 
containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" Mar 12 14:27:54.217200 master-0 kubenswrapper[7440]: E0312 14:27:54.217110 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-qtql5_openshift-kube-controller-manager-operator(1bba274a-38c7-4d13-88a5-6bc39228416c)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" podUID="1bba274a-38c7-4d13-88a5-6bc39228416c" Mar 12 14:27:54.360145 master-0 kubenswrapper[7440]: I0312 14:27:54.360110 7440 scope.go:117] "RemoveContainer" containerID="44d72417f81941751149d110a32ac53aaf3ebd578a63426cf573e0c9323995fa" Mar 12 14:27:54.408787 master-0 kubenswrapper[7440]: I0312 14:27:54.408718 7440 scope.go:117] "RemoveContainer" containerID="add6a7027222fcbcfebd634ec4319fff646d91633d5b0bce4f0126cf9eac311e" Mar 12 14:27:54.433375 master-0 kubenswrapper[7440]: I0312 14:27:54.433312 7440 scope.go:117] "RemoveContainer" containerID="90afdba5757dcaf59474b1c77f52ccec8c1322e55deca5b3c4435bc3be8ed5e2" Mar 12 14:27:54.484249 master-0 kubenswrapper[7440]: I0312 14:27:54.484147 7440 scope.go:117] "RemoveContainer" containerID="69c454beac6cc5afa4b488e211eca34b869e3d6b5b9eaf12b4d8b91763dfc9d3" Mar 12 14:27:54.523007 master-0 kubenswrapper[7440]: I0312 14:27:54.522981 7440 scope.go:117] "RemoveContainer" containerID="1435326bdb2bef433d6cb6c8682a1509956eb7447248331d4290a4c67fb3bc38" Mar 12 14:27:55.132587 master-0 kubenswrapper[7440]: I0312 14:27:55.132470 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:55.132587 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:27:55.132587 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:55.132587 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:55.133013 master-0 kubenswrapper[7440]: I0312 14:27:55.132592 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:55.230992 master-0 kubenswrapper[7440]: I0312 14:27:55.230926 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/3.log" Mar 12 14:27:55.233165 master-0 kubenswrapper[7440]: I0312 14:27:55.233130 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/3.log" Mar 12 14:27:55.235891 master-0 kubenswrapper[7440]: I0312 14:27:55.235854 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/3.log" Mar 12 14:27:55.238157 master-0 kubenswrapper[7440]: I0312 14:27:55.238081 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/3.log" Mar 12 14:27:55.240590 master-0 kubenswrapper[7440]: I0312 14:27:55.240523 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/3.log" Mar 12 14:27:55.240785 master-0 kubenswrapper[7440]: I0312 
14:27:55.240732 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"b653f2520c921bea50374d24b8a493063feaa8e5c6c64501293ba49359c77e27"} Mar 12 14:27:55.243416 master-0 kubenswrapper[7440]: I0312 14:27:55.243234 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/3.log" Mar 12 14:27:56.131313 master-0 kubenswrapper[7440]: I0312 14:27:56.131257 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:56.131313 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:56.131313 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:56.131313 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:56.131604 master-0 kubenswrapper[7440]: I0312 14:27:56.131338 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:57.132883 master-0 kubenswrapper[7440]: I0312 14:27:57.132812 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:57.132883 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:57.132883 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 
14:27:57.132883 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:57.133449 master-0 kubenswrapper[7440]: I0312 14:27:57.132933 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:57.953136 master-0 kubenswrapper[7440]: I0312 14:27:57.953010 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:57.953136 master-0 kubenswrapper[7440]: I0312 14:27:57.953095 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:58.132071 master-0 kubenswrapper[7440]: I0312 14:27:58.131998 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:58.132071 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:58.132071 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:58.132071 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:58.132326 master-0 kubenswrapper[7440]: I0312 14:27:58.132117 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:58.271510 master-0 kubenswrapper[7440]: I0312 14:27:58.271459 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/3.log" Mar 12 14:27:58.274704 master-0 kubenswrapper[7440]: I0312 14:27:58.274666 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/2.log" Mar 12 14:27:58.274870 master-0 kubenswrapper[7440]: I0312 14:27:58.274715 7440 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" exitCode=255 Mar 12 14:27:58.274870 master-0 kubenswrapper[7440]: I0312 14:27:58.274752 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c"} Mar 12 14:27:58.274870 master-0 kubenswrapper[7440]: I0312 14:27:58.274791 7440 scope.go:117] "RemoveContainer" containerID="6b752a3439d93dc1f62f53cf289ae78818fd2b1ea0f771762ddeb52536a133b6" Mar 12 14:27:58.275652 master-0 kubenswrapper[7440]: I0312 14:27:58.275620 7440 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" Mar 12 14:27:58.276996 master-0 kubenswrapper[7440]: E0312 14:27:58.275891 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:27:58.456052 master-0 kubenswrapper[7440]: I0312 14:27:58.455714 7440 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-ljnjj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:58.456052 master-0 kubenswrapper[7440]: I0312 14:27:58.455806 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" podUID="0a898118-6d01-4211-92f0-43967b75405c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:59.131725 master-0 kubenswrapper[7440]: I0312 14:27:59.131643 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:27:59.131725 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:27:59.131725 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:27:59.131725 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:27:59.132041 master-0 kubenswrapper[7440]: I0312 14:27:59.131733 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:27:59.282963 master-0 kubenswrapper[7440]: I0312 14:27:59.282890 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/3.log" Mar 12 14:27:59.517380 master-0 kubenswrapper[7440]: I0312 14:27:59.517318 7440 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 14:27:59.517698 master-0 kubenswrapper[7440]: I0312 14:27:59.517396 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 14:27:59.517698 master-0 kubenswrapper[7440]: I0312 14:27:59.517460 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:27:59.518188 master-0 kubenswrapper[7440]: I0312 14:27:59.518144 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" 
Mar 12 14:27:59.518261 master-0 kubenswrapper[7440]: I0312 14:27:59.518239 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" containerID="cri-o://e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" gracePeriod=30 Mar 12 14:27:59.638439 master-0 kubenswrapper[7440]: E0312 14:27:59.638400 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:27:59.787680 master-0 kubenswrapper[7440]: I0312 14:27:59.787528 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:27:59.788195 master-0 kubenswrapper[7440]: I0312 14:27:59.788170 7440 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" Mar 12 14:27:59.788459 master-0 kubenswrapper[7440]: E0312 14:27:59.788411 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:28:00.132875 master-0 kubenswrapper[7440]: I0312 14:28:00.132755 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:00.132875 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:00.132875 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:00.132875 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:00.132875 master-0 kubenswrapper[7440]: I0312 14:28:00.132831 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:00.292021 master-0 kubenswrapper[7440]: I0312 14:28:00.291954 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/cluster-policy-controller/5.log" Mar 12 14:28:00.292863 master-0 kubenswrapper[7440]: I0312 14:28:00.292837 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:28:00.293341 master-0 kubenswrapper[7440]: I0312 14:28:00.293299 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" exitCode=0 Mar 12 14:28:00.293407 master-0 kubenswrapper[7440]: I0312 14:28:00.293352 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerDied","Data":"e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"} Mar 12 14:28:00.293407 master-0 kubenswrapper[7440]: 
I0312 14:28:00.293401 7440 scope.go:117] "RemoveContainer" containerID="292c715d936689cc5a4e9267c3b0c4dd0ea682eff5c05fa9b9cfcf2c9fa3088f" Mar 12 14:28:00.294091 master-0 kubenswrapper[7440]: I0312 14:28:00.294062 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:28:00.294361 master-0 kubenswrapper[7440]: E0312 14:28:00.294328 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:28:00.458622 master-0 kubenswrapper[7440]: I0312 14:28:00.458505 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:28:01.129037 master-0 kubenswrapper[7440]: I0312 14:28:01.128965 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:28:01.131833 master-0 kubenswrapper[7440]: I0312 14:28:01.131793 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:01.131833 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:01.131833 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:01.131833 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:01.132004 master-0 kubenswrapper[7440]: I0312 14:28:01.131841 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:01.303674 master-0 kubenswrapper[7440]: I0312 14:28:01.303623 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:28:02.131230 master-0 kubenswrapper[7440]: I0312 14:28:02.131177 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:02.131230 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:02.131230 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:02.131230 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:02.131559 master-0 kubenswrapper[7440]: I0312 14:28:02.131249 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:03.132743 master-0 kubenswrapper[7440]: I0312 14:28:03.132693 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:03.132743 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:03.132743 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:03.132743 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:03.133354 master-0 kubenswrapper[7440]: I0312 
14:28:03.132752 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:03.804830 master-0 kubenswrapper[7440]: I0312 14:28:03.804770 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:28:03.805252 master-0 kubenswrapper[7440]: E0312 14:28:03.805190 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:28:04.131912 master-0 kubenswrapper[7440]: I0312 14:28:04.131789 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:04.131912 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:04.131912 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:04.131912 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:04.131912 master-0 kubenswrapper[7440]: I0312 14:28:04.131854 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:05.146911 master-0 kubenswrapper[7440]: I0312 14:28:05.146834 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:05.146911 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:05.146911 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:05.146911 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:05.147715 master-0 kubenswrapper[7440]: I0312 14:28:05.146918 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:05.804792 master-0 kubenswrapper[7440]: I0312 14:28:05.804722 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:28:05.805168 master-0 kubenswrapper[7440]: E0312 14:28:05.805037 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:28:05.805284 master-0 kubenswrapper[7440]: I0312 14:28:05.805178 7440 scope.go:117] "RemoveContainer" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" Mar 12 14:28:05.805683 master-0 kubenswrapper[7440]: E0312 14:28:05.805615 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-zwdgk_openshift-controller-manager-operator(d00a8cc7-7774-40bd-94a1-9ac2d0f63234)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" Mar 12 14:28:06.131457 master-0 kubenswrapper[7440]: I0312 14:28:06.131335 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:06.131457 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:06.131457 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:06.131457 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:06.131457 master-0 kubenswrapper[7440]: I0312 14:28:06.131393 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:06.517266 master-0 kubenswrapper[7440]: I0312 14:28:06.517216 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:28:06.518603 master-0 kubenswrapper[7440]: I0312 14:28:06.518587 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:28:06.518945 master-0 kubenswrapper[7440]: E0312 14:28:06.518926 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:28:06.805064 master-0 kubenswrapper[7440]: I0312 14:28:06.804966 7440 scope.go:117] "RemoveContainer" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" Mar 12 14:28:06.805235 master-0 kubenswrapper[7440]: E0312 14:28:06.805179 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-7c649bf6d4-ldxfn_openshift-network-operator(7433d9bf-4edf-4787-a7a1-e5102c7264c7)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" podUID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" Mar 12 14:28:06.805235 master-0 kubenswrapper[7440]: I0312 14:28:06.805209 7440 scope.go:117] "RemoveContainer" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" Mar 12 14:28:06.805467 master-0 kubenswrapper[7440]: E0312 14:28:06.805432 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-qtql5_openshift-kube-controller-manager-operator(1bba274a-38c7-4d13-88a5-6bc39228416c)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" podUID="1bba274a-38c7-4d13-88a5-6bc39228416c" Mar 12 14:28:07.132912 master-0 kubenswrapper[7440]: I0312 14:28:07.132742 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 12 14:28:07.132912 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:07.132912 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:07.132912 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:07.132912 master-0 kubenswrapper[7440]: I0312 14:28:07.132852 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:07.804694 master-0 kubenswrapper[7440]: I0312 14:28:07.804638 7440 scope.go:117] "RemoveContainer" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" Mar 12 14:28:07.805179 master-0 kubenswrapper[7440]: E0312 14:28:07.804873 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-7f65c457f5-hkf2t_openshift-kube-storage-version-migrator-operator(3dc73c14-852d-4957-b6ac-84366ba0594f)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" podUID="3dc73c14-852d-4957-b6ac-84366ba0594f" Mar 12 14:28:08.130837 master-0 kubenswrapper[7440]: I0312 14:28:08.130669 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:08.130837 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:08.130837 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:08.130837 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:08.130837 master-0 
kubenswrapper[7440]: I0312 14:28:08.130736 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:08.805003 master-0 kubenswrapper[7440]: I0312 14:28:08.804947 7440 scope.go:117] "RemoveContainer" containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" Mar 12 14:28:08.805493 master-0 kubenswrapper[7440]: E0312 14:28:08.805143 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-smpl5_openshift-kube-apiserver-operator(a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" podUID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" Mar 12 14:28:09.131520 master-0 kubenswrapper[7440]: I0312 14:28:09.131342 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:09.131520 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:09.131520 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:09.131520 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:09.131520 master-0 kubenswrapper[7440]: I0312 14:28:09.131463 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:10.131611 master-0 kubenswrapper[7440]: I0312 14:28:10.131571 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:10.131611 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:10.131611 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:10.131611 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:10.132259 master-0 kubenswrapper[7440]: I0312 14:28:10.132229 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:11.083581 master-0 kubenswrapper[7440]: I0312 14:28:11.083541 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 14:28:11.131093 master-0 kubenswrapper[7440]: I0312 14:28:11.131045 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:11.131093 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:11.131093 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:11.131093 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:11.131424 master-0 kubenswrapper[7440]: I0312 14:28:11.131401 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:12.132122 master-0 kubenswrapper[7440]: I0312 14:28:12.132044 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:12.132122 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:12.132122 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:12.132122 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:12.132673 master-0 kubenswrapper[7440]: I0312 14:28:12.132137 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:13.132621 master-0 kubenswrapper[7440]: I0312 14:28:13.132564 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:13.132621 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:13.132621 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:13.132621 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:13.133821 master-0 kubenswrapper[7440]: I0312 14:28:13.133775 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:13.805308 master-0 kubenswrapper[7440]: I0312 14:28:13.805248 7440 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" Mar 12 14:28:13.805558 master-0 kubenswrapper[7440]: E0312 14:28:13.805508 7440 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:28:14.131054 master-0 kubenswrapper[7440]: I0312 14:28:14.130945 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:14.131054 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:14.131054 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:14.131054 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:14.131054 master-0 kubenswrapper[7440]: I0312 14:28:14.131013 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:15.131626 master-0 kubenswrapper[7440]: I0312 14:28:15.131557 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:15.131626 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:15.131626 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:15.131626 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:15.132253 master-0 kubenswrapper[7440]: I0312 14:28:15.131648 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:15.807071 master-0 kubenswrapper[7440]: I0312 14:28:15.805032 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:28:15.807071 master-0 kubenswrapper[7440]: E0312 14:28:15.805363 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:28:16.130840 master-0 kubenswrapper[7440]: I0312 14:28:16.130732 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:16.130840 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:16.130840 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:16.130840 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:16.130840 master-0 kubenswrapper[7440]: I0312 14:28:16.130798 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:17.131087 master-0 kubenswrapper[7440]: I0312 14:28:17.131040 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:17.131087 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:17.131087 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:17.131087 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:17.131800 master-0 kubenswrapper[7440]: I0312 14:28:17.131101 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:17.804767 master-0 kubenswrapper[7440]: I0312 14:28:17.804719 7440 scope.go:117] "RemoveContainer" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" Mar 12 14:28:17.805016 master-0 kubenswrapper[7440]: E0312 14:28:17.804954 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-qtql5_openshift-kube-controller-manager-operator(1bba274a-38c7-4d13-88a5-6bc39228416c)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" podUID="1bba274a-38c7-4d13-88a5-6bc39228416c" Mar 12 14:28:18.131832 master-0 kubenswrapper[7440]: I0312 14:28:18.131641 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:18.131832 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:18.131832 master-0 kubenswrapper[7440]: [+]process-running ok Mar 
12 14:28:18.131832 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:18.131832 master-0 kubenswrapper[7440]: I0312 14:28:18.131723 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:19.131849 master-0 kubenswrapper[7440]: I0312 14:28:19.131792 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:19.131849 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:19.131849 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:19.131849 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:19.132807 master-0 kubenswrapper[7440]: I0312 14:28:19.132771 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:19.808834 master-0 kubenswrapper[7440]: I0312 14:28:19.808768 7440 scope.go:117] "RemoveContainer" containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" Mar 12 14:28:19.809124 master-0 kubenswrapper[7440]: I0312 14:28:19.808969 7440 scope.go:117] "RemoveContainer" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" Mar 12 14:28:19.809124 master-0 kubenswrapper[7440]: E0312 14:28:19.809040 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator 
pod=kube-apiserver-operator-68bd585b-smpl5_openshift-kube-apiserver-operator(a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" podUID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" Mar 12 14:28:19.809234 master-0 kubenswrapper[7440]: E0312 14:28:19.809149 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-7c649bf6d4-ldxfn_openshift-network-operator(7433d9bf-4edf-4787-a7a1-e5102c7264c7)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" podUID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" Mar 12 14:28:19.809686 master-0 kubenswrapper[7440]: I0312 14:28:19.809538 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:28:19.809823 master-0 kubenswrapper[7440]: E0312 14:28:19.809794 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:28:20.132506 master-0 kubenswrapper[7440]: I0312 14:28:20.132379 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:20.132506 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:20.132506 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:20.132506 master-0 kubenswrapper[7440]: 
healthz check failed Mar 12 14:28:20.132506 master-0 kubenswrapper[7440]: I0312 14:28:20.132455 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:20.805205 master-0 kubenswrapper[7440]: I0312 14:28:20.805138 7440 scope.go:117] "RemoveContainer" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" Mar 12 14:28:20.805525 master-0 kubenswrapper[7440]: I0312 14:28:20.805267 7440 scope.go:117] "RemoveContainer" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" Mar 12 14:28:20.805525 master-0 kubenswrapper[7440]: I0312 14:28:20.805306 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:28:20.805525 master-0 kubenswrapper[7440]: E0312 14:28:20.805365 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-zwdgk_openshift-controller-manager-operator(d00a8cc7-7774-40bd-94a1-9ac2d0f63234)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" Mar 12 14:28:20.805525 master-0 kubenswrapper[7440]: E0312 14:28:20.805440 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-7f65c457f5-hkf2t_openshift-kube-storage-version-migrator-operator(3dc73c14-852d-4957-b6ac-84366ba0594f)\"" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" podUID="3dc73c14-852d-4957-b6ac-84366ba0594f" Mar 12 14:28:20.805525 master-0 kubenswrapper[7440]: E0312 14:28:20.805492 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:28:21.131420 master-0 kubenswrapper[7440]: I0312 14:28:21.131353 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:21.131420 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:21.131420 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:21.131420 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:21.131709 master-0 kubenswrapper[7440]: I0312 14:28:21.131446 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:22.132560 master-0 kubenswrapper[7440]: I0312 14:28:22.132486 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:22.132560 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:28:22.132560 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:22.132560 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:22.133110 master-0 kubenswrapper[7440]: I0312 14:28:22.132576 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:23.132362 master-0 kubenswrapper[7440]: I0312 14:28:23.132263 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:23.132362 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:23.132362 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:23.132362 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:23.132362 master-0 kubenswrapper[7440]: I0312 14:28:23.132332 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:24.131877 master-0 kubenswrapper[7440]: I0312 14:28:24.131806 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:24.131877 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:24.131877 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:24.131877 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:24.132172 master-0 kubenswrapper[7440]: I0312 14:28:24.131882 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:25.132617 master-0 kubenswrapper[7440]: I0312 14:28:25.132287 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:25.132617 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:25.132617 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:25.132617 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:25.132617 master-0 kubenswrapper[7440]: I0312 14:28:25.132392 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:26.131602 master-0 kubenswrapper[7440]: I0312 14:28:26.131507 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:26.131602 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:26.131602 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:26.131602 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:26.131949 master-0 kubenswrapper[7440]: I0312 14:28:26.131650 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:28:27.132505 master-0 kubenswrapper[7440]: I0312 14:28:27.132459 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:27.132505 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:27.132505 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:27.132505 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:27.133059 master-0 kubenswrapper[7440]: I0312 14:28:27.132523 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:27.805266 master-0 kubenswrapper[7440]: I0312 14:28:27.805204 7440 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" Mar 12 14:28:27.805509 master-0 kubenswrapper[7440]: E0312 14:28:27.805417 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-7f8bfc67b-pz8rc_openshift-route-controller-manager(df31c4c2-304e-4bad-8e6f-18c174eba675)\"" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" Mar 12 14:28:28.131140 master-0 kubenswrapper[7440]: I0312 14:28:28.131074 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:28:28.131140 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:28.131140 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:28.131140 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:28.131446 master-0 kubenswrapper[7440]: I0312 14:28:28.131156 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:28.805469 master-0 kubenswrapper[7440]: I0312 14:28:28.805396 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:28:28.806322 master-0 kubenswrapper[7440]: E0312 14:28:28.805728 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:28:29.131969 master-0 kubenswrapper[7440]: I0312 14:28:29.131811 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:29.131969 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:29.131969 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:29.131969 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:29.131969 master-0 kubenswrapper[7440]: I0312 14:28:29.131886 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:30.133021 master-0 kubenswrapper[7440]: I0312 14:28:30.132940 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:30.133021 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:30.133021 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:30.133021 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:30.134046 master-0 kubenswrapper[7440]: I0312 14:28:30.133886 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:30.209128 master-0 kubenswrapper[7440]: I0312 14:28:30.209034 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=20.209011872 podStartE2EDuration="20.209011872s" podCreationTimestamp="2026-03-12 14:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:28:19.835019589 +0000 UTC m=+960.170398158" watchObservedRunningTime="2026-03-12 14:28:30.209011872 +0000 UTC m=+970.544390431" Mar 12 14:28:30.211382 master-0 kubenswrapper[7440]: I0312 14:28:30.211332 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:28:30.211641 master-0 kubenswrapper[7440]: I0312 14:28:30.211597 7440 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="telemeter-client" containerID="cri-o://e168f066de3b5e7cee61e5586918c799488b316ce826a0bf7d5dd489987b0eb1" gracePeriod=30 Mar 12 14:28:30.211768 master-0 kubenswrapper[7440]: I0312 14:28:30.211682 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="kube-rbac-proxy" containerID="cri-o://41b5c53c49fe52ce73b445331a08ad2c82edf1c84e6716a8772e0ee97bf8ec25" gracePeriod=30 Mar 12 14:28:30.211978 master-0 kubenswrapper[7440]: I0312 14:28:30.211686 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="reload" containerID="cri-o://a010fc25af9de91f067075c308675d17b18bd607610859f12d4815abc91678e3" gracePeriod=30 Mar 12 14:28:30.478120 master-0 kubenswrapper[7440]: I0312 14:28:30.478064 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-zjmz4_e2a8ac56-734c-4d51-9171-0540f8b9f242/telemeter-client/0.log" Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478133 7440 generic.go:334] "Generic (PLEG): container finished" podID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerID="41b5c53c49fe52ce73b445331a08ad2c82edf1c84e6716a8772e0ee97bf8ec25" exitCode=0 Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478154 7440 generic.go:334] "Generic (PLEG): container finished" podID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerID="a010fc25af9de91f067075c308675d17b18bd607610859f12d4815abc91678e3" exitCode=0 Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478167 7440 generic.go:334] "Generic (PLEG): container finished" podID="e2a8ac56-734c-4d51-9171-0540f8b9f242" 
containerID="e168f066de3b5e7cee61e5586918c799488b316ce826a0bf7d5dd489987b0eb1" exitCode=2 Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478185 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerDied","Data":"41b5c53c49fe52ce73b445331a08ad2c82edf1c84e6716a8772e0ee97bf8ec25"} Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478274 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerDied","Data":"a010fc25af9de91f067075c308675d17b18bd607610859f12d4815abc91678e3"} Mar 12 14:28:30.478423 master-0 kubenswrapper[7440]: I0312 14:28:30.478290 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerDied","Data":"e168f066de3b5e7cee61e5586918c799488b316ce826a0bf7d5dd489987b0eb1"} Mar 12 14:28:30.616203 master-0 kubenswrapper[7440]: I0312 14:28:30.616169 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-zjmz4_e2a8ac56-734c-4d51-9171-0540f8b9f242/telemeter-client/0.log" Mar 12 14:28:30.616327 master-0 kubenswrapper[7440]: I0312 14:28:30.616240 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:28:30.799063 master-0 kubenswrapper[7440]: I0312 14:28:30.798861 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799063 master-0 kubenswrapper[7440]: I0312 14:28:30.798961 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799063 master-0 kubenswrapper[7440]: I0312 14:28:30.798997 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799491 master-0 kubenswrapper[7440]: I0312 14:28:30.799145 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799491 master-0 kubenswrapper[7440]: I0312 14:28:30.799264 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: 
\"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799491 master-0 kubenswrapper[7440]: I0312 14:28:30.799329 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799491 master-0 kubenswrapper[7440]: I0312 14:28:30.799407 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjplz\" (UniqueName: \"kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799491 master-0 kubenswrapper[7440]: I0312 14:28:30.799446 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client\") pod \"e2a8ac56-734c-4d51-9171-0540f8b9f242\" (UID: \"e2a8ac56-734c-4d51-9171-0540f8b9f242\") " Mar 12 14:28:30.799755 master-0 kubenswrapper[7440]: I0312 14:28:30.799608 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "serving-certs-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:28:30.799803 master-0 kubenswrapper[7440]: I0312 14:28:30.799734 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:28:30.799857 master-0 kubenswrapper[7440]: I0312 14:28:30.799780 7440 reconciler_common.go:293] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.800490 master-0 kubenswrapper[7440]: I0312 14:28:30.800384 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:28:30.803159 master-0 kubenswrapper[7440]: I0312 14:28:30.803132 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "telemeter-client-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:28:30.803423 master-0 kubenswrapper[7440]: I0312 14:28:30.803343 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls" (OuterVolumeSpecName: "federate-client-tls") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "federate-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:28:30.803571 master-0 kubenswrapper[7440]: I0312 14:28:30.803504 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:28:30.803733 master-0 kubenswrapper[7440]: I0312 14:28:30.803660 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:28:30.803883 master-0 kubenswrapper[7440]: I0312 14:28:30.803848 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz" (OuterVolumeSpecName: "kube-api-access-kjplz") pod "e2a8ac56-734c-4d51-9171-0540f8b9f242" (UID: "e2a8ac56-734c-4d51-9171-0540f8b9f242"). InnerVolumeSpecName "kube-api-access-kjplz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:28:30.804440 master-0 kubenswrapper[7440]: I0312 14:28:30.804413 7440 scope.go:117] "RemoveContainer" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" Mar 12 14:28:30.804670 master-0 kubenswrapper[7440]: E0312 14:28:30.804639 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-7c649bf6d4-ldxfn_openshift-network-operator(7433d9bf-4edf-4787-a7a1-e5102c7264c7)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" podUID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" Mar 12 14:28:30.900886 master-0 kubenswrapper[7440]: I0312 14:28:30.900769 7440 reconciler_common.go:293] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-client-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.900886 master-0 kubenswrapper[7440]: I0312 14:28:30.900821 7440 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.900886 master-0 kubenswrapper[7440]: I0312 14:28:30.900839 7440 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client-kube-rbac-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.900886 master-0 kubenswrapper[7440]: I0312 14:28:30.900849 7440 reconciler_common.go:293] "Volume detached for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-federate-client-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.900886 master-0 
kubenswrapper[7440]: I0312 14:28:30.900862 7440 reconciler_common.go:293] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8ac56-734c-4d51-9171-0540f8b9f242-telemeter-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.901312 master-0 kubenswrapper[7440]: I0312 14:28:30.900987 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjplz\" (UniqueName: \"kubernetes.io/projected/e2a8ac56-734c-4d51-9171-0540f8b9f242-kube-api-access-kjplz\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:30.901312 master-0 kubenswrapper[7440]: I0312 14:28:30.901066 7440 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/e2a8ac56-734c-4d51-9171-0540f8b9f242-secret-telemeter-client\") on node \"master-0\" DevicePath \"\"" Mar 12 14:28:31.131618 master-0 kubenswrapper[7440]: I0312 14:28:31.131463 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:31.131618 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:31.131618 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:31.131618 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:31.131618 master-0 kubenswrapper[7440]: I0312 14:28:31.131580 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:31.489349 master-0 kubenswrapper[7440]: I0312 14:28:31.489272 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-zjmz4_e2a8ac56-734c-4d51-9171-0540f8b9f242/telemeter-client/0.log" Mar 12 14:28:31.489349 master-0 kubenswrapper[7440]: I0312 14:28:31.489347 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" event={"ID":"e2a8ac56-734c-4d51-9171-0540f8b9f242","Type":"ContainerDied","Data":"73192d725f60950aaefb2920f89f9df4128d62e1eb94b2e0025225f509337195"} Mar 12 14:28:31.490225 master-0 kubenswrapper[7440]: I0312 14:28:31.489390 7440 scope.go:117] "RemoveContainer" containerID="41b5c53c49fe52ce73b445331a08ad2c82edf1c84e6716a8772e0ee97bf8ec25" Mar 12 14:28:31.490225 master-0 kubenswrapper[7440]: I0312 14:28:31.489536 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4" Mar 12 14:28:31.511012 master-0 kubenswrapper[7440]: I0312 14:28:31.510961 7440 scope.go:117] "RemoveContainer" containerID="a010fc25af9de91f067075c308675d17b18bd607610859f12d4815abc91678e3" Mar 12 14:28:31.532211 master-0 kubenswrapper[7440]: I0312 14:28:31.532091 7440 scope.go:117] "RemoveContainer" containerID="e168f066de3b5e7cee61e5586918c799488b316ce826a0bf7d5dd489987b0eb1" Mar 12 14:28:31.538194 master-0 kubenswrapper[7440]: I0312 14:28:31.538127 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:28:31.542152 master-0 kubenswrapper[7440]: I0312 14:28:31.542097 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-zjmz4"] Mar 12 14:28:31.812968 master-0 kubenswrapper[7440]: I0312 14:28:31.812837 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" path="/var/lib/kubelet/pods/e2a8ac56-734c-4d51-9171-0540f8b9f242/volumes" Mar 12 14:28:32.131455 master-0 kubenswrapper[7440]: I0312 14:28:32.131355 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:32.131455 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:32.131455 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:32.131455 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:32.131455 master-0 kubenswrapper[7440]: I0312 14:28:32.131417 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:32.806039 master-0 kubenswrapper[7440]: I0312 14:28:32.805954 7440 scope.go:117] "RemoveContainer" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" Mar 12 14:28:32.807764 master-0 kubenswrapper[7440]: E0312 14:28:32.806350 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-qtql5_openshift-kube-controller-manager-operator(1bba274a-38c7-4d13-88a5-6bc39228416c)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" podUID="1bba274a-38c7-4d13-88a5-6bc39228416c" Mar 12 14:28:33.133043 master-0 kubenswrapper[7440]: I0312 14:28:33.132921 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:33.133043 master-0 kubenswrapper[7440]: [-]has-synced failed: reason 
withheld Mar 12 14:28:33.133043 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:33.133043 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:33.133043 master-0 kubenswrapper[7440]: I0312 14:28:33.133000 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:33.805698 master-0 kubenswrapper[7440]: I0312 14:28:33.805610 7440 scope.go:117] "RemoveContainer" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" Mar 12 14:28:33.806260 master-0 kubenswrapper[7440]: E0312 14:28:33.806093 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-zwdgk_openshift-controller-manager-operator(d00a8cc7-7774-40bd-94a1-9ac2d0f63234)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" podUID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" Mar 12 14:28:34.132620 master-0 kubenswrapper[7440]: I0312 14:28:34.132498 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:34.132620 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:34.132620 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:34.132620 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:34.132620 master-0 kubenswrapper[7440]: I0312 14:28:34.132589 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:34.804701 master-0 kubenswrapper[7440]: I0312 14:28:34.804652 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:28:34.805015 master-0 kubenswrapper[7440]: E0312 14:28:34.804848 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:28:34.805063 master-0 kubenswrapper[7440]: I0312 14:28:34.805022 7440 scope.go:117] "RemoveContainer" containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" Mar 12 14:28:35.131732 master-0 kubenswrapper[7440]: I0312 14:28:35.131533 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:35.131732 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:35.131732 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:35.131732 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:35.131732 master-0 kubenswrapper[7440]: I0312 14:28:35.131645 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 14:28:35.532034 master-0 kubenswrapper[7440]: I0312 14:28:35.531953 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/3.log" Mar 12 14:28:35.532034 master-0 kubenswrapper[7440]: I0312 14:28:35.532026 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"a5a8fe9347723240cf160315b7dc5a4ab938896729de851e21ca853677fbf3ce"} Mar 12 14:28:35.805227 master-0 kubenswrapper[7440]: I0312 14:28:35.805065 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:28:35.805493 master-0 kubenswrapper[7440]: E0312 14:28:35.805297 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:28:35.805493 master-0 kubenswrapper[7440]: I0312 14:28:35.805326 7440 scope.go:117] "RemoveContainer" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" Mar 12 14:28:36.132334 master-0 kubenswrapper[7440]: I0312 14:28:36.132201 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:36.132334 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:36.132334 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:36.132334 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:36.132334 master-0 kubenswrapper[7440]: I0312 14:28:36.132294 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:36.559344 master-0 kubenswrapper[7440]: I0312 14:28:36.559291 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/3.log" Mar 12 14:28:36.559745 master-0 kubenswrapper[7440]: I0312 14:28:36.559705 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"9ebeb9694fca5f3db47e9fa609996cadf840e959f920863cd859cd6c26d01671"} Mar 12 14:28:37.132755 master-0 kubenswrapper[7440]: I0312 14:28:37.132644 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:37.132755 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:37.132755 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:37.132755 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:37.132755 master-0 kubenswrapper[7440]: I0312 14:28:37.132749 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:38.132219 master-0 kubenswrapper[7440]: I0312 14:28:38.132140 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:38.132219 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:38.132219 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:38.132219 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:38.132667 master-0 kubenswrapper[7440]: I0312 14:28:38.132220 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:39.132591 master-0 kubenswrapper[7440]: I0312 14:28:39.132470 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:39.132591 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:39.132591 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:39.132591 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:39.132591 master-0 kubenswrapper[7440]: I0312 14:28:39.132553 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:40.131373 master-0 kubenswrapper[7440]: I0312 14:28:40.131313 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:40.131373 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:40.131373 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:40.131373 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:40.131726 master-0 kubenswrapper[7440]: I0312 14:28:40.131376 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:41.130666 master-0 kubenswrapper[7440]: I0312 14:28:41.130618 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:41.130666 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:41.130666 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:41.130666 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:41.131266 master-0 kubenswrapper[7440]: I0312 14:28:41.130704 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:41.804871 master-0 kubenswrapper[7440]: I0312 14:28:41.804821 7440 scope.go:117] "RemoveContainer" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" Mar 12 14:28:42.131599 master-0 kubenswrapper[7440]: I0312 14:28:42.131501 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:42.131599 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:42.131599 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:42.131599 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:42.131599 master-0 kubenswrapper[7440]: I0312 14:28:42.131569 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:42.604914 master-0 kubenswrapper[7440]: I0312 14:28:42.604871 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/3.log" Mar 12 14:28:42.605107 master-0 kubenswrapper[7440]: I0312 14:28:42.604956 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"4e84d09329d158806666f09503ce18f2a051ebedca7fa710b43371c50013f13b"} Mar 12 14:28:42.805377 master-0 kubenswrapper[7440]: I0312 14:28:42.805321 7440 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" Mar 12 14:28:43.131574 master-0 kubenswrapper[7440]: I0312 14:28:43.131385 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:43.131574 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:43.131574 master-0 kubenswrapper[7440]: 
[+]process-running ok Mar 12 14:28:43.131574 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:43.131574 master-0 kubenswrapper[7440]: I0312 14:28:43.131468 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:43.612807 master-0 kubenswrapper[7440]: I0312 14:28:43.612764 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/3.log" Mar 12 14:28:43.613068 master-0 kubenswrapper[7440]: I0312 14:28:43.612824 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95"} Mar 12 14:28:43.613240 master-0 kubenswrapper[7440]: I0312 14:28:43.613195 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:28:43.621808 master-0 kubenswrapper[7440]: I0312 14:28:43.621753 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:28:43.805566 master-0 kubenswrapper[7440]: I0312 14:28:43.805493 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:28:43.805809 master-0 kubenswrapper[7440]: E0312 14:28:43.805764 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:28:44.132080 master-0 kubenswrapper[7440]: I0312 14:28:44.131994 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:44.132080 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:44.132080 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:44.132080 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:44.133185 master-0 kubenswrapper[7440]: I0312 14:28:44.132103 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:44.805053 master-0 kubenswrapper[7440]: I0312 14:28:44.805005 7440 scope.go:117] "RemoveContainer" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" Mar 12 14:28:45.132751 master-0 kubenswrapper[7440]: I0312 14:28:45.132604 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:45.132751 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:45.132751 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:45.132751 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:45.132751 master-0 kubenswrapper[7440]: I0312 
14:28:45.132686 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:45.624607 master-0 kubenswrapper[7440]: I0312 14:28:45.624547 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/3.log" Mar 12 14:28:45.624827 master-0 kubenswrapper[7440]: I0312 14:28:45.624642 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"b6bbd0c5f61f89850e4a55dd74cf02eb9ebef972bb50c7b01561e16b68e8704e"} Mar 12 14:28:46.132363 master-0 kubenswrapper[7440]: I0312 14:28:46.132263 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:46.132363 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:46.132363 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:46.132363 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:46.132772 master-0 kubenswrapper[7440]: I0312 14:28:46.132399 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:47.130942 master-0 kubenswrapper[7440]: I0312 14:28:47.130836 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:47.130942 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:47.130942 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:47.130942 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:47.130942 master-0 kubenswrapper[7440]: I0312 14:28:47.130919 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:47.805816 master-0 kubenswrapper[7440]: I0312 14:28:47.805751 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:28:47.806507 master-0 kubenswrapper[7440]: E0312 14:28:47.806007 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:28:47.806634 master-0 kubenswrapper[7440]: I0312 14:28:47.806586 7440 scope.go:117] "RemoveContainer" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" Mar 12 14:28:47.968545 master-0 kubenswrapper[7440]: I0312 14:28:47.968476 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 12 14:28:47.968764 master-0 kubenswrapper[7440]: I0312 14:28:47.968565 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:28:48.131372 master-0 kubenswrapper[7440]: I0312 14:28:48.131231 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:48.131372 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:48.131372 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:48.131372 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:48.131372 master-0 kubenswrapper[7440]: I0312 14:28:48.131317 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:48.648092 master-0 kubenswrapper[7440]: I0312 14:28:48.648037 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/3.log" Mar 12 14:28:48.648092 master-0 kubenswrapper[7440]: I0312 14:28:48.648099 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" 
event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"62047f99b4ce506e99d53fe6ad293c502f400eb032ad29d0d887e3da41f2256c"} Mar 12 14:28:49.131801 master-0 kubenswrapper[7440]: I0312 14:28:49.131748 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:49.131801 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:49.131801 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:49.131801 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:49.132440 master-0 kubenswrapper[7440]: I0312 14:28:49.131814 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:49.808479 master-0 kubenswrapper[7440]: I0312 14:28:49.808428 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:28:49.808708 master-0 kubenswrapper[7440]: E0312 14:28:49.808679 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:28:50.131528 master-0 kubenswrapper[7440]: I0312 14:28:50.131388 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:50.131528 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:50.131528 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:50.131528 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:50.131528 master-0 kubenswrapper[7440]: I0312 14:28:50.131466 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:51.131461 master-0 kubenswrapper[7440]: I0312 14:28:51.131357 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:51.131461 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:51.131461 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:51.131461 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:51.132094 master-0 kubenswrapper[7440]: I0312 14:28:51.131457 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:52.132082 master-0 kubenswrapper[7440]: I0312 14:28:52.131982 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:52.132082 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:52.132082 master-0 kubenswrapper[7440]: 
[+]process-running ok Mar 12 14:28:52.132082 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:52.133508 master-0 kubenswrapper[7440]: I0312 14:28:52.132095 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:53.131555 master-0 kubenswrapper[7440]: I0312 14:28:53.131498 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:53.131555 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:53.131555 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:53.131555 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:53.131869 master-0 kubenswrapper[7440]: I0312 14:28:53.131570 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:54.132291 master-0 kubenswrapper[7440]: I0312 14:28:54.132183 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:54.132291 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:54.132291 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:54.132291 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:54.132291 master-0 kubenswrapper[7440]: I0312 14:28:54.132255 7440 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:55.131711 master-0 kubenswrapper[7440]: I0312 14:28:55.131588 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:55.131711 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:55.131711 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:55.131711 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:55.131711 master-0 kubenswrapper[7440]: I0312 14:28:55.131698 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:56.132399 master-0 kubenswrapper[7440]: I0312 14:28:56.132309 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:56.132399 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:56.132399 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:56.132399 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:56.133473 master-0 kubenswrapper[7440]: I0312 14:28:56.132400 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 14:28:56.804833 master-0 kubenswrapper[7440]: I0312 14:28:56.804766 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:28:56.805255 master-0 kubenswrapper[7440]: E0312 14:28:56.805206 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-jpf47_openshift-authentication-operator(57930a54-89ab-4ec8-a504-74035bb74d63)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" podUID="57930a54-89ab-4ec8-a504-74035bb74d63" Mar 12 14:28:57.131571 master-0 kubenswrapper[7440]: I0312 14:28:57.131392 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:57.131571 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:57.131571 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:57.131571 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:57.131571 master-0 kubenswrapper[7440]: I0312 14:28:57.131513 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:58.131973 master-0 kubenswrapper[7440]: I0312 14:28:58.131854 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:58.131973 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:28:58.131973 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:58.131973 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:58.131973 master-0 kubenswrapper[7440]: I0312 14:28:58.131971 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:59.131470 master-0 kubenswrapper[7440]: I0312 14:28:59.131397 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:28:59.131470 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:28:59.131470 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:28:59.131470 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:28:59.131470 master-0 kubenswrapper[7440]: I0312 14:28:59.131464 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:28:59.808545 master-0 kubenswrapper[7440]: I0312 14:28:59.808488 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:28:59.809096 master-0 kubenswrapper[7440]: E0312 14:28:59.808801 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:29:00.131843 master-0 kubenswrapper[7440]: I0312 14:29:00.131692 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:00.131843 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:00.131843 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:00.131843 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:00.131843 master-0 kubenswrapper[7440]: I0312 14:29:00.131790 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:01.131860 master-0 kubenswrapper[7440]: I0312 14:29:01.131786 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:01.131860 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:01.131860 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:01.131860 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:01.132413 master-0 kubenswrapper[7440]: I0312 14:29:01.131883 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:01.805942 master-0 kubenswrapper[7440]: I0312 14:29:01.805853 7440 
scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:29:01.806459 master-0 kubenswrapper[7440]: E0312 14:29:01.806383 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:29:02.132101 master-0 kubenswrapper[7440]: I0312 14:29:02.131958 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:02.132101 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:02.132101 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:02.132101 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:02.132688 master-0 kubenswrapper[7440]: I0312 14:29:02.132093 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:03.132341 master-0 kubenswrapper[7440]: I0312 14:29:03.132214 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:03.132341 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:03.132341 master-0 kubenswrapper[7440]: 
[+]process-running ok Mar 12 14:29:03.132341 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:03.133112 master-0 kubenswrapper[7440]: I0312 14:29:03.132355 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:04.132052 master-0 kubenswrapper[7440]: I0312 14:29:04.132003 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:04.132052 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:04.132052 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:04.132052 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:04.133518 master-0 kubenswrapper[7440]: I0312 14:29:04.132914 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:05.131926 master-0 kubenswrapper[7440]: I0312 14:29:05.131811 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:05.131926 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:05.131926 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:05.131926 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:05.132327 master-0 kubenswrapper[7440]: I0312 14:29:05.131931 7440 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:06.133768 master-0 kubenswrapper[7440]: I0312 14:29:06.133693 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:06.133768 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:06.133768 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:06.133768 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:06.134431 master-0 kubenswrapper[7440]: I0312 14:29:06.133820 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:07.131048 master-0 kubenswrapper[7440]: I0312 14:29:07.130954 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:07.131048 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:07.131048 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:07.131048 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:07.131403 master-0 kubenswrapper[7440]: I0312 14:29:07.131059 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 14:29:08.131626 master-0 kubenswrapper[7440]: I0312 14:29:08.131576 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:08.131626 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:08.131626 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:08.131626 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:08.132297 master-0 kubenswrapper[7440]: I0312 14:29:08.131646 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:09.131818 master-0 kubenswrapper[7440]: I0312 14:29:09.131760 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:09.131818 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:09.131818 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:09.131818 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:09.132446 master-0 kubenswrapper[7440]: I0312 14:29:09.131833 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:10.132019 master-0 kubenswrapper[7440]: I0312 14:29:10.131945 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:10.132019 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:10.132019 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:10.132019 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:10.132019 master-0 kubenswrapper[7440]: I0312 14:29:10.132007 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:10.804798 master-0 kubenswrapper[7440]: I0312 14:29:10.804735 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:29:10.805040 master-0 kubenswrapper[7440]: E0312 14:29:10.805009 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:29:11.132005 master-0 kubenswrapper[7440]: I0312 14:29:11.131872 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:11.132005 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:11.132005 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:11.132005 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:11.132005 master-0 
kubenswrapper[7440]: I0312 14:29:11.131961 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:11.804992 master-0 kubenswrapper[7440]: I0312 14:29:11.804924 7440 scope.go:117] "RemoveContainer" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" Mar 12 14:29:12.131834 master-0 kubenswrapper[7440]: I0312 14:29:12.131714 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:12.131834 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:12.131834 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:12.131834 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:12.131834 master-0 kubenswrapper[7440]: I0312 14:29:12.131782 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:12.825261 master-0 kubenswrapper[7440]: I0312 14:29:12.825207 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/4.log" Mar 12 14:29:12.825261 master-0 kubenswrapper[7440]: I0312 14:29:12.825257 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" 
event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"d99d8c1ae20305282e19d20db3c4034a70d569c692d3ca52db2c6c835d89056f"} Mar 12 14:29:13.132317 master-0 kubenswrapper[7440]: I0312 14:29:13.132172 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:13.132317 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:13.132317 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:13.132317 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:13.132317 master-0 kubenswrapper[7440]: I0312 14:29:13.132246 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:14.132663 master-0 kubenswrapper[7440]: I0312 14:29:14.132605 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:14.132663 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:14.132663 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:14.132663 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:14.133607 master-0 kubenswrapper[7440]: I0312 14:29:14.133568 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:15.132020 master-0 kubenswrapper[7440]: I0312 
14:29:15.131955 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:15.132020 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:15.132020 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:15.132020 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:15.132540 master-0 kubenswrapper[7440]: I0312 14:29:15.132501 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:15.805170 master-0 kubenswrapper[7440]: I0312 14:29:15.805115 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:29:15.805735 master-0 kubenswrapper[7440]: E0312 14:29:15.805537 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:29:16.131314 master-0 kubenswrapper[7440]: I0312 14:29:16.131167 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:16.131314 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:16.131314 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:16.131314 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:16.131314 master-0 kubenswrapper[7440]: I0312 14:29:16.131266 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:17.131165 master-0 kubenswrapper[7440]: I0312 14:29:17.131103 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:17.131165 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:17.131165 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:17.131165 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:17.131731 master-0 kubenswrapper[7440]: I0312 14:29:17.131174 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:17.968655 master-0 kubenswrapper[7440]: I0312 14:29:17.968307 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:29:17.968655 master-0 kubenswrapper[7440]: I0312 14:29:17.968459 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:29:18.131686 master-0 kubenswrapper[7440]: I0312 14:29:18.131621 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:18.131686 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:18.131686 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:18.131686 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:18.132250 master-0 kubenswrapper[7440]: I0312 14:29:18.131716 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:19.132749 master-0 kubenswrapper[7440]: I0312 14:29:19.132636 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:19.132749 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:19.132749 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:19.132749 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:19.132749 master-0 kubenswrapper[7440]: I0312 14:29:19.132708 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:20.131937 master-0 kubenswrapper[7440]: I0312 14:29:20.131844 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:20.131937 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:20.131937 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:20.131937 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:20.133422 master-0 kubenswrapper[7440]: I0312 14:29:20.131964 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:21.131133 master-0 kubenswrapper[7440]: I0312 14:29:21.131067 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:21.131133 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:21.131133 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:21.131133 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:21.131133 master-0 kubenswrapper[7440]: I0312 14:29:21.131127 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:22.131608 master-0 kubenswrapper[7440]: I0312 14:29:22.131526 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:22.131608 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:22.131608 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:22.131608 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:22.132494 master-0 kubenswrapper[7440]: I0312 14:29:22.131609 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:22.805081 master-0 kubenswrapper[7440]: I0312 14:29:22.804991 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" Mar 12 14:29:22.805458 master-0 kubenswrapper[7440]: E0312 14:29:22.805256 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed" Mar 12 14:29:23.133491 master-0 kubenswrapper[7440]: I0312 14:29:23.133215 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:23.133491 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:23.133491 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:23.133491 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:23.133491 master-0 kubenswrapper[7440]: I0312 14:29:23.133368 7440 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:24.131929 master-0 kubenswrapper[7440]: I0312 14:29:24.131842 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:24.131929 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:24.131929 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:24.131929 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:24.132372 master-0 kubenswrapper[7440]: I0312 14:29:24.131942 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:25.131233 master-0 kubenswrapper[7440]: I0312 14:29:25.131166 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:25.131233 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:25.131233 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:25.131233 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:25.131845 master-0 kubenswrapper[7440]: I0312 14:29:25.131253 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 12 14:29:26.131609 master-0 kubenswrapper[7440]: I0312 14:29:26.131529 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:26.131609 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:26.131609 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:26.131609 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:26.131609 master-0 kubenswrapper[7440]: I0312 14:29:26.131603 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:27.131187 master-0 kubenswrapper[7440]: I0312 14:29:27.131121 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:27.131187 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:27.131187 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:27.131187 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:27.131187 master-0 kubenswrapper[7440]: I0312 14:29:27.131182 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:28.132226 master-0 kubenswrapper[7440]: I0312 14:29:28.132169 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:28.132226 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:28.132226 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:28.132226 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:28.133184 master-0 kubenswrapper[7440]: I0312 14:29:28.133049 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:29.133262 master-0 kubenswrapper[7440]: I0312 14:29:29.133175 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:29.133262 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:29.133262 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:29.133262 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:29.133881 master-0 kubenswrapper[7440]: I0312 14:29:29.133273 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:29.810577 master-0 kubenswrapper[7440]: I0312 14:29:29.810514 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:29:29.810844 master-0 kubenswrapper[7440]: E0312 14:29:29.810755 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:29:30.132350 master-0 kubenswrapper[7440]: I0312 14:29:30.132190 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:30.132350 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:30.132350 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:30.132350 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:30.132350 master-0 kubenswrapper[7440]: I0312 14:29:30.132256 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:29:31.132160 master-0 kubenswrapper[7440]: I0312 14:29:31.132091 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:29:31.132160 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:29:31.132160 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:29:31.132160 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:29:31.132692 master-0 kubenswrapper[7440]: I0312 14:29:31.132182 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:32.130928 master-0 kubenswrapper[7440]: I0312 14:29:32.130824 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:32.130928 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:32.130928 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:32.130928 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:32.131247 master-0 kubenswrapper[7440]: I0312 14:29:32.130930 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:33.131970 master-0 kubenswrapper[7440]: I0312 14:29:33.131858 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:33.131970 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:33.131970 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:33.131970 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:33.131970 master-0 kubenswrapper[7440]: I0312 14:29:33.131957 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:33.804534 master-0 kubenswrapper[7440]: I0312 14:29:33.804481 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"
Mar 12 14:29:33.804788 master-0 kubenswrapper[7440]: E0312 14:29:33.804727 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed"
Mar 12 14:29:34.132438 master-0 kubenswrapper[7440]: I0312 14:29:34.132268 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:34.132438 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:34.132438 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:34.132438 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:34.132438 master-0 kubenswrapper[7440]: I0312 14:29:34.132393 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:35.130984 master-0 kubenswrapper[7440]: I0312 14:29:35.130914 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:35.130984 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:35.130984 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:35.130984 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:35.131336 master-0 kubenswrapper[7440]: I0312 14:29:35.130985 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:36.131855 master-0 kubenswrapper[7440]: I0312 14:29:36.131732 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:36.131855 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:36.131855 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:36.131855 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:36.131855 master-0 kubenswrapper[7440]: I0312 14:29:36.131847 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:37.131907 master-0 kubenswrapper[7440]: I0312 14:29:37.131829 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:37.131907 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:37.131907 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:37.131907 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:37.132457 master-0 kubenswrapper[7440]: I0312 14:29:37.131947 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:38.132521 master-0 kubenswrapper[7440]: I0312 14:29:38.132449 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:38.132521 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:38.132521 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:38.132521 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:38.132521 master-0 kubenswrapper[7440]: I0312 14:29:38.132512 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:39.130922 master-0 kubenswrapper[7440]: I0312 14:29:39.130834 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:39.130922 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:39.130922 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:39.130922 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:39.131422 master-0 kubenswrapper[7440]: I0312 14:29:39.130935 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:40.132743 master-0 kubenswrapper[7440]: I0312 14:29:40.132681 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:40.132743 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:40.132743 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:40.132743 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:40.133800 master-0 kubenswrapper[7440]: I0312 14:29:40.132767 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:41.131891 master-0 kubenswrapper[7440]: I0312 14:29:41.131815 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:41.131891 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:41.131891 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:41.131891 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:41.132418 master-0 kubenswrapper[7440]: I0312 14:29:41.131977 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:42.133495 master-0 kubenswrapper[7440]: I0312 14:29:42.133367 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:42.133495 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:42.133495 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:42.133495 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:42.134558 master-0 kubenswrapper[7440]: I0312 14:29:42.133543 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:43.132782 master-0 kubenswrapper[7440]: I0312 14:29:43.132726 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:43.132782 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:43.132782 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:43.132782 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:43.133178 master-0 kubenswrapper[7440]: I0312 14:29:43.132803 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:43.804844 master-0 kubenswrapper[7440]: I0312 14:29:43.804784 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:29:43.805637 master-0 kubenswrapper[7440]: E0312 14:29:43.805085 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:29:44.131737 master-0 kubenswrapper[7440]: I0312 14:29:44.131597 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:44.131737 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:44.131737 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:44.131737 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:44.131737 master-0 kubenswrapper[7440]: I0312 14:29:44.131666 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:45.131612 master-0 kubenswrapper[7440]: I0312 14:29:45.131537 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:45.131612 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:45.131612 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:45.131612 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:45.132446 master-0 kubenswrapper[7440]: I0312 14:29:45.131610 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:46.131218 master-0 kubenswrapper[7440]: I0312 14:29:46.131161 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:46.131218 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:46.131218 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:46.131218 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:46.131503 master-0 kubenswrapper[7440]: I0312 14:29:46.131227 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:47.130808 master-0 kubenswrapper[7440]: I0312 14:29:47.130758 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:47.130808 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:47.130808 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:47.130808 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:47.131360 master-0 kubenswrapper[7440]: I0312 14:29:47.130836 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:47.967870 master-0 kubenswrapper[7440]: I0312 14:29:47.967813 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:29:47.968098 master-0 kubenswrapper[7440]: I0312 14:29:47.967889 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:29:47.968098 master-0 kubenswrapper[7440]: I0312 14:29:47.967962 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8"
Mar 12 14:29:47.968710 master-0 kubenswrapper[7440]: I0312 14:29:47.968678 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4"} pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 14:29:47.968777 master-0 kubenswrapper[7440]: I0312 14:29:47.968758 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" containerID="cri-o://2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4" gracePeriod=600
Mar 12 14:29:48.131468 master-0 kubenswrapper[7440]: I0312 14:29:48.131418 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:48.131468 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:48.131468 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:48.131468 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:48.131942 master-0 kubenswrapper[7440]: I0312 14:29:48.131485 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:48.160257 master-0 kubenswrapper[7440]: I0312 14:29:48.160199 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4" exitCode=0
Mar 12 14:29:48.160448 master-0 kubenswrapper[7440]: I0312 14:29:48.160252 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4"}
Mar 12 14:29:48.160448 master-0 kubenswrapper[7440]: I0312 14:29:48.160333 7440 scope.go:117] "RemoveContainer" containerID="3d291d3f8cf9b232bd82f0a951b10eec242d292f5ec0b07ae030409f0e0e9d18"
Mar 12 14:29:48.805276 master-0 kubenswrapper[7440]: I0312 14:29:48.805198 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"
Mar 12 14:29:48.805506 master-0 kubenswrapper[7440]: E0312 14:29:48.805463 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed"
Mar 12 14:29:49.132374 master-0 kubenswrapper[7440]: I0312 14:29:49.132260 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:49.132374 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:49.132374 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:49.132374 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:49.132374 master-0 kubenswrapper[7440]: I0312 14:29:49.132331 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:49.173797 master-0 kubenswrapper[7440]: I0312 14:29:49.173730 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff"}
Mar 12 14:29:50.132337 master-0 kubenswrapper[7440]: I0312 14:29:50.132185 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:50.132337 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:50.132337 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:50.132337 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:50.132337 master-0 kubenswrapper[7440]: I0312 14:29:50.132284 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:51.131278 master-0 kubenswrapper[7440]: I0312 14:29:51.131227 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:51.131278 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:51.131278 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:51.131278 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:51.131597 master-0 kubenswrapper[7440]: I0312 14:29:51.131298 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:52.133045 master-0 kubenswrapper[7440]: I0312 14:29:52.132966 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:52.133045 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:52.133045 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:52.133045 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:52.133742 master-0 kubenswrapper[7440]: I0312 14:29:52.133087 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:53.131758 master-0 kubenswrapper[7440]: I0312 14:29:53.131700 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:29:53.131758 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:29:53.131758 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:29:53.131758 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:29:53.132118 master-0 kubenswrapper[7440]: I0312 14:29:53.131794 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:29:53.132118 master-0 kubenswrapper[7440]: I0312 14:29:53.131865 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:29:53.132850 master-0 kubenswrapper[7440]: I0312 14:29:53.132790 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 14:29:53.132951 master-0 kubenswrapper[7440]: I0312 14:29:53.132885 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595" gracePeriod=3600
Mar 12 14:29:56.805017 master-0 kubenswrapper[7440]: I0312 14:29:56.804872 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:29:56.806015 master-0 kubenswrapper[7440]: E0312 14:29:56.805152 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:29:58.236500 master-0 kubenswrapper[7440]: I0312 14:29:58.236391 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/4.log"
Mar 12 14:29:58.237057 master-0 kubenswrapper[7440]: I0312 14:29:58.237018 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/3.log"
Mar 12 14:29:58.237449 master-0 kubenswrapper[7440]: I0312 14:29:58.237411 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" exitCode=1
Mar 12 14:29:58.237500 master-0 kubenswrapper[7440]: I0312 14:29:58.237447 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"}
Mar 12 14:29:58.237500 master-0 kubenswrapper[7440]: I0312 14:29:58.237478 7440 scope.go:117] "RemoveContainer" containerID="ce4ac6bc5605b012a8c47f4c0b169a09ed9e7155807e4b4269519a7e642d6b66"
Mar 12 14:29:58.238031 master-0 kubenswrapper[7440]: I0312 14:29:58.238001 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"
Mar 12 14:29:58.239040 master-0 kubenswrapper[7440]: E0312 14:29:58.238506 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:29:59.244465 master-0 kubenswrapper[7440]: I0312 14:29:59.244418 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/4.log"
Mar 12 14:30:00.805422 master-0 kubenswrapper[7440]: I0312 14:30:00.805366 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"
Mar 12 14:30:00.805865 master-0 kubenswrapper[7440]: E0312 14:30:00.805694 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-z9hzg_openshift-cluster-storage-operator(d56089bf-177c-492d-8964-73a45574e7ed)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" podUID="d56089bf-177c-492d-8964-73a45574e7ed"
Mar 12 14:30:08.805361 master-0 kubenswrapper[7440]: I0312 14:30:08.805288 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:30:08.805361 master-0 kubenswrapper[7440]: I0312 14:30:08.805368 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"
Mar 12 14:30:08.806598 master-0 kubenswrapper[7440]: E0312 14:30:08.805570 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:30:08.806598 master-0 kubenswrapper[7440]: E0312 14:30:08.805572 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:30:12.805023 master-0 kubenswrapper[7440]: I0312 14:30:12.804963 7440 scope.go:117] "RemoveContainer" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"
Mar 12 14:30:13.344065 master-0 kubenswrapper[7440]: I0312 14:30:13.343972 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/5.log"
Mar 12 14:30:13.344065 master-0 kubenswrapper[7440]: I0312 14:30:13.344047 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"a625865f3b69893afdeab1c428fb5b3ab0a928ff5b48376f2646d22f9267fdfd"}
Mar 12 14:30:22.805866 master-0 kubenswrapper[7440]: I0312 14:30:22.805783 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:30:22.806588 master-0 kubenswrapper[7440]: E0312 14:30:22.806130 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:30:23.805737 master-0 kubenswrapper[7440]: I0312 14:30:23.805639 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"
Mar 12 14:30:23.806126 master-0 kubenswrapper[7440]: E0312 14:30:23.805986 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:30:33.805375 master-0 kubenswrapper[7440]: I0312 14:30:33.805301 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:30:33.806731 master-0 kubenswrapper[7440]: E0312 14:30:33.805988 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:30:35.805506 master-0 kubenswrapper[7440]: I0312 14:30:35.805434 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343"
Mar 12 14:30:35.806221 master-0 kubenswrapper[7440]: E0312 14:30:35.805656 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:30:39.528935 master-0 kubenswrapper[7440]: I0312 14:30:39.528872 7440 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595" exitCode=0
Mar 12 14:30:39.528935 master-0 kubenswrapper[7440]: I0312 14:30:39.528939 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595"}
Mar 12 14:30:39.529563 master-0 kubenswrapper[7440]: I0312 14:30:39.528971 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee"}
Mar 12 14:30:39.529563 master-0 kubenswrapper[7440]: I0312 14:30:39.528991 7440 scope.go:117] "RemoveContainer" containerID="8ea8824cc66d3733dec4f191955e838e6c7cbda51a4332331b8b1ab5e09b2eaf"
Mar 12 14:30:40.129743 master-0 kubenswrapper[7440]: I0312 14:30:40.129642 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:30:40.132573 master-0 kubenswrapper[7440]: I0312 14:30:40.132523 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:40.132573 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:40.132573 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:40.132573 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:30:40.132847 master-0 kubenswrapper[7440]: I0312 14:30:40.132572 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:30:41.129360 master-0 kubenswrapper[7440]: I0312 14:30:41.129271 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:30:41.131816 master-0 kubenswrapper[7440]: I0312 14:30:41.131763 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:41.131816 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:41.131816 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:41.131816 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:30:41.132068 master-0 kubenswrapper[7440]: I0312 14:30:41.131830 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:30:42.131505 master-0 kubenswrapper[7440]: I0312 14:30:42.131432 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:42.131505 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:42.131505 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:42.131505 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:30:42.131505 master-0 kubenswrapper[7440]: I0312 14:30:42.131500 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:30:43.131788 master-0 kubenswrapper[7440]: I0312 14:30:43.131714 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:43.131788 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:43.131788 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:43.131788 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:30:43.132431 master-0 kubenswrapper[7440]: I0312 14:30:43.131787 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:30:44.131989 master-0 kubenswrapper[7440]: I0312 14:30:44.131877 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:44.131989 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:44.131989 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:44.131989 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:30:44.132657 master-0 kubenswrapper[7440]: I0312 14:30:44.131992 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:30:44.805734 master-0 kubenswrapper[7440]: I0312 14:30:44.805639 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:30:44.806334 master-0 kubenswrapper[7440]: E0312 14:30:44.805930 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405"
Mar 12 14:30:45.132390 master-0 kubenswrapper[7440]: I0312 14:30:45.132257 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:30:45.132390 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:30:45.132390 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:30:45.132390 master-0 kubenswrapper[7440]: healthz 
check failed Mar 12 14:30:45.132390 master-0 kubenswrapper[7440]: I0312 14:30:45.132343 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:46.131098 master-0 kubenswrapper[7440]: I0312 14:30:46.131017 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:46.131098 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:46.131098 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:46.131098 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:46.131098 master-0 kubenswrapper[7440]: I0312 14:30:46.131080 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:47.131882 master-0 kubenswrapper[7440]: I0312 14:30:47.131805 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:47.131882 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:47.131882 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:47.131882 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:47.131882 master-0 kubenswrapper[7440]: I0312 14:30:47.131878 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:48.131931 master-0 kubenswrapper[7440]: I0312 14:30:48.131806 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:48.131931 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:48.131931 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:48.131931 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:48.131931 master-0 kubenswrapper[7440]: I0312 14:30:48.131876 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:48.805274 master-0 kubenswrapper[7440]: I0312 14:30:48.805207 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" Mar 12 14:30:48.805498 master-0 kubenswrapper[7440]: E0312 14:30:48.805444 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:30:49.132280 master-0 kubenswrapper[7440]: I0312 14:30:49.132078 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:49.132280 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:49.132280 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:49.132280 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:49.132280 master-0 kubenswrapper[7440]: I0312 14:30:49.132153 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:50.131687 master-0 kubenswrapper[7440]: I0312 14:30:50.131613 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:50.131687 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:50.131687 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:50.131687 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:50.131687 master-0 kubenswrapper[7440]: I0312 14:30:50.131686 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:51.131588 master-0 kubenswrapper[7440]: I0312 14:30:51.131529 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:51.131588 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:51.131588 master-0 kubenswrapper[7440]: [+]process-running ok 
Mar 12 14:30:51.131588 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:51.131588 master-0 kubenswrapper[7440]: I0312 14:30:51.131591 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:52.132311 master-0 kubenswrapper[7440]: I0312 14:30:52.132221 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:52.132311 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:52.132311 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:52.132311 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:52.132880 master-0 kubenswrapper[7440]: I0312 14:30:52.132347 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:53.131295 master-0 kubenswrapper[7440]: I0312 14:30:53.131221 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:53.131295 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:53.131295 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:53.131295 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:53.131295 master-0 kubenswrapper[7440]: I0312 14:30:53.131294 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:54.131616 master-0 kubenswrapper[7440]: I0312 14:30:54.131508 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:54.131616 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:54.131616 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:54.131616 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:54.131616 master-0 kubenswrapper[7440]: I0312 14:30:54.131597 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:55.131667 master-0 kubenswrapper[7440]: I0312 14:30:55.131576 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:55.131667 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:55.131667 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:55.131667 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:55.131667 master-0 kubenswrapper[7440]: I0312 14:30:55.131651 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:55.805480 
master-0 kubenswrapper[7440]: I0312 14:30:55.805392 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:30:55.805977 master-0 kubenswrapper[7440]: E0312 14:30:55.805619 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:30:56.131854 master-0 kubenswrapper[7440]: I0312 14:30:56.131702 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:56.131854 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:56.131854 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:56.131854 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:56.131854 master-0 kubenswrapper[7440]: I0312 14:30:56.131827 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:57.131428 master-0 kubenswrapper[7440]: I0312 14:30:57.131323 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:57.131428 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:30:57.131428 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:57.131428 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:57.132564 master-0 kubenswrapper[7440]: I0312 14:30:57.131442 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:58.131681 master-0 kubenswrapper[7440]: I0312 14:30:58.131560 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:58.131681 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:58.131681 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:58.131681 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:58.131681 master-0 kubenswrapper[7440]: I0312 14:30:58.131623 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:30:59.132070 master-0 kubenswrapper[7440]: I0312 14:30:59.132007 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:30:59.132070 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:30:59.132070 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:30:59.132070 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:30:59.132070 master-0 kubenswrapper[7440]: I0312 14:30:59.132070 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:00.131593 master-0 kubenswrapper[7440]: I0312 14:31:00.131524 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:00.131593 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:00.131593 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:00.131593 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:00.131593 master-0 kubenswrapper[7440]: I0312 14:31:00.131597 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:01.131720 master-0 kubenswrapper[7440]: I0312 14:31:01.131627 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:01.131720 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:01.131720 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:01.131720 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:01.133096 master-0 kubenswrapper[7440]: I0312 14:31:01.131745 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:31:02.132158 master-0 kubenswrapper[7440]: I0312 14:31:02.132058 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:02.132158 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:02.132158 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:02.132158 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:02.132158 master-0 kubenswrapper[7440]: I0312 14:31:02.132125 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:02.804981 master-0 kubenswrapper[7440]: I0312 14:31:02.804859 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" Mar 12 14:31:02.805319 master-0 kubenswrapper[7440]: E0312 14:31:02.805247 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:31:03.131525 master-0 kubenswrapper[7440]: I0312 14:31:03.131363 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:03.131525 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:31:03.131525 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:03.131525 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:03.131525 master-0 kubenswrapper[7440]: I0312 14:31:03.131428 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:04.132133 master-0 kubenswrapper[7440]: I0312 14:31:04.132052 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:04.132133 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:04.132133 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:04.132133 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:04.132133 master-0 kubenswrapper[7440]: I0312 14:31:04.132130 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:05.132079 master-0 kubenswrapper[7440]: I0312 14:31:05.131984 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:05.132079 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:05.132079 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:05.132079 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:05.132697 master-0 
kubenswrapper[7440]: I0312 14:31:05.132139 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:06.131361 master-0 kubenswrapper[7440]: I0312 14:31:06.131262 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:06.131361 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:06.131361 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:06.131361 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:06.131662 master-0 kubenswrapper[7440]: I0312 14:31:06.131403 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:07.131704 master-0 kubenswrapper[7440]: I0312 14:31:07.131620 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:07.131704 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:07.131704 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:07.131704 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:07.132770 master-0 kubenswrapper[7440]: I0312 14:31:07.131731 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:08.131251 master-0 kubenswrapper[7440]: I0312 14:31:08.131202 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:08.131251 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:08.131251 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:08.131251 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:08.131661 master-0 kubenswrapper[7440]: I0312 14:31:08.131630 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:09.132312 master-0 kubenswrapper[7440]: I0312 14:31:09.132198 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:09.132312 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:09.132312 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:09.132312 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:09.132312 master-0 kubenswrapper[7440]: I0312 14:31:09.132313 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:09.809960 master-0 kubenswrapper[7440]: I0312 14:31:09.808987 7440 scope.go:117] "RemoveContainer" 
containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:31:09.809960 master-0 kubenswrapper[7440]: E0312 14:31:09.809333 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:31:10.130924 master-0 kubenswrapper[7440]: I0312 14:31:10.130778 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:10.130924 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:10.130924 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:10.130924 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:10.131359 master-0 kubenswrapper[7440]: I0312 14:31:10.131325 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:11.131521 master-0 kubenswrapper[7440]: I0312 14:31:11.131270 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:11.131521 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:11.131521 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:11.131521 
master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:11.131521 master-0 kubenswrapper[7440]: I0312 14:31:11.131316 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:12.131121 master-0 kubenswrapper[7440]: I0312 14:31:12.131057 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:12.131121 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:12.131121 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:12.131121 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:12.131490 master-0 kubenswrapper[7440]: I0312 14:31:12.131145 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:13.132511 master-0 kubenswrapper[7440]: I0312 14:31:13.132435 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:13.132511 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:13.132511 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:13.132511 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:13.133618 master-0 kubenswrapper[7440]: I0312 14:31:13.132571 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:14.131127 master-0 kubenswrapper[7440]: I0312 14:31:14.131069 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:14.131127 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:14.131127 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:14.131127 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:14.131390 master-0 kubenswrapper[7440]: I0312 14:31:14.131128 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:15.130637 master-0 kubenswrapper[7440]: I0312 14:31:15.130582 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:15.130637 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:15.130637 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:15.130637 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:15.131198 master-0 kubenswrapper[7440]: I0312 14:31:15.130642 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:15.804568 
master-0 kubenswrapper[7440]: I0312 14:31:15.804508 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" Mar 12 14:31:15.804807 master-0 kubenswrapper[7440]: E0312 14:31:15.804723 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:31:16.132841 master-0 kubenswrapper[7440]: I0312 14:31:16.132707 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:16.132841 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:16.132841 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:16.132841 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:16.132841 master-0 kubenswrapper[7440]: I0312 14:31:16.132796 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:17.132706 master-0 kubenswrapper[7440]: I0312 14:31:17.132639 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:17.132706 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:17.132706 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:17.132706 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:17.133278 master-0 kubenswrapper[7440]: I0312 14:31:17.132719 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:18.131163 master-0 kubenswrapper[7440]: I0312 14:31:18.131101 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:18.131163 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:18.131163 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:18.131163 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:18.131585 master-0 kubenswrapper[7440]: I0312 14:31:18.131180 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:19.131394 master-0 kubenswrapper[7440]: I0312 14:31:19.131264 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:19.131394 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:19.131394 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:19.131394 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:19.131394 master-0 kubenswrapper[7440]: I0312 14:31:19.131352 7440 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:20.131812 master-0 kubenswrapper[7440]: I0312 14:31:20.131733 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:20.131812 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:20.131812 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:20.131812 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:20.131812 master-0 kubenswrapper[7440]: I0312 14:31:20.131809 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:21.132339 master-0 kubenswrapper[7440]: I0312 14:31:21.132283 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:21.132339 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:21.132339 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:21.132339 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:21.133265 master-0 kubenswrapper[7440]: I0312 14:31:21.133175 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 14:31:22.132017 master-0 kubenswrapper[7440]: I0312 14:31:22.131943 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:22.132017 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:22.132017 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:22.132017 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:22.132017 master-0 kubenswrapper[7440]: I0312 14:31:22.132011 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:23.131889 master-0 kubenswrapper[7440]: I0312 14:31:23.131812 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:23.131889 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:23.131889 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:23.131889 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:23.132430 master-0 kubenswrapper[7440]: I0312 14:31:23.131962 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:23.804826 master-0 kubenswrapper[7440]: I0312 14:31:23.804777 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:31:23.805184 
master-0 kubenswrapper[7440]: E0312 14:31:23.805022 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:31:24.132607 master-0 kubenswrapper[7440]: I0312 14:31:24.132461 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:24.132607 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:24.132607 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:24.132607 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:24.132607 master-0 kubenswrapper[7440]: I0312 14:31:24.132545 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:25.133436 master-0 kubenswrapper[7440]: I0312 14:31:25.133269 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:25.133436 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:25.133436 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:25.133436 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:25.135062 master-0 kubenswrapper[7440]: 
I0312 14:31:25.133491 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:26.132046 master-0 kubenswrapper[7440]: I0312 14:31:26.131958 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:26.132046 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:26.132046 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:26.132046 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:26.132356 master-0 kubenswrapper[7440]: I0312 14:31:26.132076 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:27.142143 master-0 kubenswrapper[7440]: I0312 14:31:27.142014 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:27.142143 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:27.142143 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:27.142143 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:27.142781 master-0 kubenswrapper[7440]: I0312 14:31:27.142161 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:28.131812 master-0 kubenswrapper[7440]: I0312 14:31:28.131745 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:28.131812 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:28.131812 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:28.131812 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:28.131812 master-0 kubenswrapper[7440]: I0312 14:31:28.131806 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:29.131335 master-0 kubenswrapper[7440]: I0312 14:31:29.131187 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:29.131335 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:29.131335 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:29.131335 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:29.131335 master-0 kubenswrapper[7440]: I0312 14:31:29.131301 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:29.816695 master-0 kubenswrapper[7440]: I0312 14:31:29.816609 7440 scope.go:117] "RemoveContainer" 
containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" Mar 12 14:31:30.132190 master-0 kubenswrapper[7440]: I0312 14:31:30.132060 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:30.132190 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:30.132190 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:30.132190 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:30.132190 master-0 kubenswrapper[7440]: I0312 14:31:30.132117 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:30.871622 master-0 kubenswrapper[7440]: I0312 14:31:30.871560 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/4.log" Mar 12 14:31:30.872247 master-0 kubenswrapper[7440]: I0312 14:31:30.872149 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"} Mar 12 14:31:31.132238 master-0 kubenswrapper[7440]: I0312 14:31:31.132135 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:31.132238 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 
12 14:31:31.132238 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:31.132238 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:31.133099 master-0 kubenswrapper[7440]: I0312 14:31:31.132260 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:32.132433 master-0 kubenswrapper[7440]: I0312 14:31:32.132308 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:32.132433 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:32.132433 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:32.132433 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:32.132433 master-0 kubenswrapper[7440]: I0312 14:31:32.132425 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:33.132325 master-0 kubenswrapper[7440]: I0312 14:31:33.132229 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:33.132325 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:33.132325 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:33.132325 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:33.132325 master-0 kubenswrapper[7440]: I0312 14:31:33.132329 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:34.134212 master-0 kubenswrapper[7440]: I0312 14:31:34.134101 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:34.134212 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:34.134212 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:34.134212 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:34.135353 master-0 kubenswrapper[7440]: I0312 14:31:34.134251 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:35.130698 master-0 kubenswrapper[7440]: I0312 14:31:35.130661 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:35.130698 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:35.130698 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:35.130698 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:35.131076 master-0 kubenswrapper[7440]: I0312 14:31:35.131052 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:31:36.133013 master-0 kubenswrapper[7440]: I0312 14:31:36.132923 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:36.133013 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:36.133013 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:36.133013 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:36.133797 master-0 kubenswrapper[7440]: I0312 14:31:36.133025 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:37.132939 master-0 kubenswrapper[7440]: I0312 14:31:37.132813 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:37.132939 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:37.132939 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:37.132939 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:37.134191 master-0 kubenswrapper[7440]: I0312 14:31:37.132996 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:37.804810 master-0 kubenswrapper[7440]: I0312 14:31:37.804728 7440 scope.go:117] "RemoveContainer" 
containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:31:37.805146 master-0 kubenswrapper[7440]: E0312 14:31:37.805061 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:31:38.131812 master-0 kubenswrapper[7440]: I0312 14:31:38.131650 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:38.131812 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:38.131812 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:38.131812 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:38.131812 master-0 kubenswrapper[7440]: I0312 14:31:38.131714 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:39.131346 master-0 kubenswrapper[7440]: I0312 14:31:39.131221 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:39.131346 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:39.131346 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:39.131346 
master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:39.131918 master-0 kubenswrapper[7440]: I0312 14:31:39.131361 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:40.132880 master-0 kubenswrapper[7440]: I0312 14:31:40.132792 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:40.132880 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:40.132880 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:40.132880 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:40.133659 master-0 kubenswrapper[7440]: I0312 14:31:40.132929 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:41.131574 master-0 kubenswrapper[7440]: I0312 14:31:41.131492 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:41.131574 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:41.131574 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:41.131574 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:41.132014 master-0 kubenswrapper[7440]: I0312 14:31:41.131602 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:42.132332 master-0 kubenswrapper[7440]: I0312 14:31:42.132236 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:42.132332 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:42.132332 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:42.132332 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:42.132332 master-0 kubenswrapper[7440]: I0312 14:31:42.132302 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:43.131854 master-0 kubenswrapper[7440]: I0312 14:31:43.131772 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:43.131854 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:43.131854 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:43.131854 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:43.131854 master-0 kubenswrapper[7440]: I0312 14:31:43.131833 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:44.130984 
master-0 kubenswrapper[7440]: I0312 14:31:44.130923 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:44.130984 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:44.130984 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:44.130984 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:44.130984 master-0 kubenswrapper[7440]: I0312 14:31:44.130983 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:45.132212 master-0 kubenswrapper[7440]: I0312 14:31:45.132129 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:45.132212 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:45.132212 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:45.132212 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:45.132212 master-0 kubenswrapper[7440]: I0312 14:31:45.132192 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:46.131662 master-0 kubenswrapper[7440]: I0312 14:31:46.131556 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:46.131662 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:46.131662 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:46.131662 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:46.132052 master-0 kubenswrapper[7440]: I0312 14:31:46.131672 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:47.131878 master-0 kubenswrapper[7440]: I0312 14:31:47.131789 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:47.131878 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:47.131878 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:47.131878 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:47.132506 master-0 kubenswrapper[7440]: I0312 14:31:47.131959 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:48.131957 master-0 kubenswrapper[7440]: I0312 14:31:48.131837 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:48.131957 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:48.131957 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:48.131957 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:48.133402 master-0 kubenswrapper[7440]: I0312 14:31:48.131993 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:49.132060 master-0 kubenswrapper[7440]: I0312 14:31:49.131940 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:49.132060 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:49.132060 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:49.132060 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:49.133166 master-0 kubenswrapper[7440]: I0312 14:31:49.132047 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:50.130880 master-0 kubenswrapper[7440]: I0312 14:31:50.130791 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:50.130880 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:50.130880 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:50.130880 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:50.130880 master-0 kubenswrapper[7440]: I0312 14:31:50.130846 7440 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:51.132196 master-0 kubenswrapper[7440]: I0312 14:31:51.132103 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:51.132196 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:51.132196 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:51.132196 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:51.133165 master-0 kubenswrapper[7440]: I0312 14:31:51.132222 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:52.132812 master-0 kubenswrapper[7440]: I0312 14:31:52.132694 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:52.132812 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:52.132812 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:52.132812 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:52.132812 master-0 kubenswrapper[7440]: I0312 14:31:52.132797 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 14:31:52.805461 master-0 kubenswrapper[7440]: I0312 14:31:52.805354 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:31:52.805822 master-0 kubenswrapper[7440]: E0312 14:31:52.805753 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:31:53.133376 master-0 kubenswrapper[7440]: I0312 14:31:53.133193 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:53.133376 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:53.133376 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:53.133376 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:53.134426 master-0 kubenswrapper[7440]: I0312 14:31:53.134062 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:54.132038 master-0 kubenswrapper[7440]: I0312 14:31:54.131990 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:54.132038 master-0 kubenswrapper[7440]: [-]has-synced 
failed: reason withheld Mar 12 14:31:54.132038 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:54.132038 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:54.132568 master-0 kubenswrapper[7440]: I0312 14:31:54.132538 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:55.132613 master-0 kubenswrapper[7440]: I0312 14:31:55.132500 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:55.132613 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:55.132613 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:55.132613 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:55.133350 master-0 kubenswrapper[7440]: I0312 14:31:55.132628 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:56.131968 master-0 kubenswrapper[7440]: I0312 14:31:56.131859 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:56.131968 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:56.131968 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:56.131968 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:56.132258 master-0 
kubenswrapper[7440]: I0312 14:31:56.131975 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:57.131369 master-0 kubenswrapper[7440]: I0312 14:31:57.131318 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:57.131369 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:57.131369 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:57.131369 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:57.132054 master-0 kubenswrapper[7440]: I0312 14:31:57.131380 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:58.132041 master-0 kubenswrapper[7440]: I0312 14:31:58.131966 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:58.132041 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:58.132041 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:58.132041 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:58.132757 master-0 kubenswrapper[7440]: I0312 14:31:58.132069 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:31:59.131497 master-0 kubenswrapper[7440]: I0312 14:31:59.131445 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:31:59.131497 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:31:59.131497 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:31:59.131497 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:31:59.132065 master-0 kubenswrapper[7440]: I0312 14:31:59.132023 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:00.131288 master-0 kubenswrapper[7440]: I0312 14:32:00.131202 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:00.131288 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:00.131288 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:00.131288 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:00.131288 master-0 kubenswrapper[7440]: I0312 14:32:00.131289 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:01.131639 master-0 kubenswrapper[7440]: I0312 14:32:01.131551 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:01.131639 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:01.131639 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:01.131639 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:01.131639 master-0 kubenswrapper[7440]: I0312 14:32:01.131636 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:02.132371 master-0 kubenswrapper[7440]: I0312 14:32:02.132285 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:02.132371 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:02.132371 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:02.132371 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:02.133080 master-0 kubenswrapper[7440]: I0312 14:32:02.132368 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:03.131092 master-0 kubenswrapper[7440]: I0312 14:32:03.131034 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:32:03.131092 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:03.131092 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:03.131092 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:03.131373 master-0 kubenswrapper[7440]: I0312 14:32:03.131115 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:04.131910 master-0 kubenswrapper[7440]: I0312 14:32:04.131818 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:04.131910 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:04.131910 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:04.131910 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:04.132620 master-0 kubenswrapper[7440]: I0312 14:32:04.131945 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:05.132580 master-0 kubenswrapper[7440]: I0312 14:32:05.132472 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:05.132580 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:05.132580 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:05.132580 master-0 kubenswrapper[7440]: healthz 
check failed Mar 12 14:32:05.132580 master-0 kubenswrapper[7440]: I0312 14:32:05.132575 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:06.132256 master-0 kubenswrapper[7440]: I0312 14:32:06.132206 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:06.132256 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:06.132256 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:06.132256 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:06.132735 master-0 kubenswrapper[7440]: I0312 14:32:06.132709 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:07.132235 master-0 kubenswrapper[7440]: I0312 14:32:07.132142 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:07.132235 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:07.132235 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:07.132235 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:07.132511 master-0 kubenswrapper[7440]: I0312 14:32:07.132237 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:07.806130 master-0 kubenswrapper[7440]: I0312 14:32:07.806052 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:32:07.806739 master-0 kubenswrapper[7440]: E0312 14:32:07.806649 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:32:08.131781 master-0 kubenswrapper[7440]: I0312 14:32:08.131643 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:08.131781 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:08.131781 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:08.131781 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:08.131781 master-0 kubenswrapper[7440]: I0312 14:32:08.131738 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:09.134980 master-0 kubenswrapper[7440]: I0312 14:32:09.134891 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:09.134980 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:09.134980 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:09.134980 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:09.135585 master-0 kubenswrapper[7440]: I0312 14:32:09.134993 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:10.132203 master-0 kubenswrapper[7440]: I0312 14:32:10.132153 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:10.132203 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:10.132203 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:10.132203 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:10.132649 master-0 kubenswrapper[7440]: I0312 14:32:10.132216 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:11.132089 master-0 kubenswrapper[7440]: I0312 14:32:11.131986 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:11.132089 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:11.132089 master-0 kubenswrapper[7440]: 
[+]process-running ok Mar 12 14:32:11.132089 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:11.133159 master-0 kubenswrapper[7440]: I0312 14:32:11.132092 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:12.132239 master-0 kubenswrapper[7440]: I0312 14:32:12.132144 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:12.132239 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:12.132239 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:12.132239 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:12.132239 master-0 kubenswrapper[7440]: I0312 14:32:12.132235 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:13.132089 master-0 kubenswrapper[7440]: I0312 14:32:13.131996 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:13.132089 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:13.132089 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:13.132089 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:13.133217 master-0 kubenswrapper[7440]: I0312 14:32:13.132104 7440 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:14.131639 master-0 kubenswrapper[7440]: I0312 14:32:14.131565 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:14.131639 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:14.131639 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:14.131639 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:14.131639 master-0 kubenswrapper[7440]: I0312 14:32:14.131640 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:15.131937 master-0 kubenswrapper[7440]: I0312 14:32:15.131846 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:15.131937 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:15.131937 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:15.131937 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:15.132598 master-0 kubenswrapper[7440]: I0312 14:32:15.131975 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 14:32:16.154507 master-0 kubenswrapper[7440]: I0312 14:32:16.154435 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:16.154507 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:16.154507 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:16.154507 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:16.155146 master-0 kubenswrapper[7440]: I0312 14:32:16.154518 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:17.131371 master-0 kubenswrapper[7440]: I0312 14:32:17.131312 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:17.131371 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:17.131371 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:17.131371 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:17.132072 master-0 kubenswrapper[7440]: I0312 14:32:17.132023 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:17.968623 master-0 kubenswrapper[7440]: I0312 14:32:17.968550 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:32:17.969866 master-0 kubenswrapper[7440]: I0312 14:32:17.969811 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:32:18.131383 master-0 kubenswrapper[7440]: I0312 14:32:18.131277 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:18.131383 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:18.131383 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:18.131383 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:18.131383 master-0 kubenswrapper[7440]: I0312 14:32:18.131364 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:18.805269 master-0 kubenswrapper[7440]: I0312 14:32:18.805176 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:32:18.805577 master-0 kubenswrapper[7440]: E0312 14:32:18.805404 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:32:19.133412 master-0 kubenswrapper[7440]: I0312 14:32:19.133202 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:19.133412 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:19.133412 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:19.133412 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:19.133412 master-0 kubenswrapper[7440]: I0312 14:32:19.133349 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:20.133063 master-0 kubenswrapper[7440]: I0312 14:32:20.132999 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:20.133063 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:20.133063 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:20.133063 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:20.134168 master-0 kubenswrapper[7440]: I0312 14:32:20.133795 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 14:32:21.132105 master-0 kubenswrapper[7440]: I0312 14:32:21.132024 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:21.132105 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:21.132105 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:21.132105 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:21.132611 master-0 kubenswrapper[7440]: I0312 14:32:21.132116 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:22.131458 master-0 kubenswrapper[7440]: I0312 14:32:22.131420 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:22.131458 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:22.131458 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:22.131458 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:22.132113 master-0 kubenswrapper[7440]: I0312 14:32:22.132042 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:23.132296 master-0 kubenswrapper[7440]: I0312 14:32:23.132219 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:23.132296 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:23.132296 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:23.132296 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:23.132296 master-0 kubenswrapper[7440]: I0312 14:32:23.132296 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:24.131591 master-0 kubenswrapper[7440]: I0312 14:32:24.131532 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:24.131591 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:24.131591 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:24.131591 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:24.131967 master-0 kubenswrapper[7440]: I0312 14:32:24.131597 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:25.131294 master-0 kubenswrapper[7440]: I0312 14:32:25.131230 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:25.131294 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:32:25.131294 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:25.131294 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:25.132034 master-0 kubenswrapper[7440]: I0312 14:32:25.131301 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:26.131111 master-0 kubenswrapper[7440]: I0312 14:32:26.131032 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:26.131111 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:26.131111 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:26.131111 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:26.131674 master-0 kubenswrapper[7440]: I0312 14:32:26.131118 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:27.132660 master-0 kubenswrapper[7440]: I0312 14:32:27.132539 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:27.132660 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:27.132660 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:27.132660 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:27.132660 master-0 kubenswrapper[7440]: I0312 14:32:27.132657 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:28.132582 master-0 kubenswrapper[7440]: I0312 14:32:28.132441 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:28.132582 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:28.132582 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:28.132582 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:28.132582 master-0 kubenswrapper[7440]: I0312 14:32:28.132551 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:29.131210 master-0 kubenswrapper[7440]: I0312 14:32:29.131132 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:29.131210 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:29.131210 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:29.131210 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:29.131210 master-0 kubenswrapper[7440]: I0312 14:32:29.131204 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:32:30.131469 master-0 kubenswrapper[7440]: I0312 14:32:30.131392 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:30.131469 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:30.131469 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:30.131469 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:30.132069 master-0 kubenswrapper[7440]: I0312 14:32:30.131473 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:30.804842 master-0 kubenswrapper[7440]: I0312 14:32:30.804782 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:32:30.805103 master-0 kubenswrapper[7440]: E0312 14:32:30.805042 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:32:31.132714 master-0 kubenswrapper[7440]: I0312 14:32:31.132626 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:31.132714 master-0 
kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:31.132714 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:31.132714 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:31.133315 master-0 kubenswrapper[7440]: I0312 14:32:31.132718 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:32.133440 master-0 kubenswrapper[7440]: I0312 14:32:32.133361 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:32.133440 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:32.133440 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:32.133440 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:32.133440 master-0 kubenswrapper[7440]: I0312 14:32:32.133434 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:33.131460 master-0 kubenswrapper[7440]: I0312 14:32:33.131382 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:33.131460 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:33.131460 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:33.131460 master-0 kubenswrapper[7440]: healthz check failed Mar 12 
14:32:33.131727 master-0 kubenswrapper[7440]: I0312 14:32:33.131462 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:34.132073 master-0 kubenswrapper[7440]: I0312 14:32:34.131962 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:34.132073 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:34.132073 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:34.132073 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:34.132756 master-0 kubenswrapper[7440]: I0312 14:32:34.132109 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:35.132229 master-0 kubenswrapper[7440]: I0312 14:32:35.132161 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:35.132229 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:35.132229 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:35.132229 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:35.132845 master-0 kubenswrapper[7440]: I0312 14:32:35.132253 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:36.131501 master-0 kubenswrapper[7440]: I0312 14:32:36.131453 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:36.131501 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:36.131501 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:36.131501 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:36.131804 master-0 kubenswrapper[7440]: I0312 14:32:36.131523 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:37.132963 master-0 kubenswrapper[7440]: I0312 14:32:37.132849 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:37.132963 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:37.132963 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:37.132963 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:37.134215 master-0 kubenswrapper[7440]: I0312 14:32:37.132991 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:38.130726 master-0 kubenswrapper[7440]: I0312 14:32:38.130672 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:38.130726 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:38.130726 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:38.130726 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:38.131110 master-0 kubenswrapper[7440]: I0312 14:32:38.130728 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:39.131818 master-0 kubenswrapper[7440]: I0312 14:32:39.131761 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:32:39.131818 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:32:39.131818 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:32:39.131818 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:32:39.131818 master-0 kubenswrapper[7440]: I0312 14:32:39.131814 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:32:39.132539 master-0 kubenswrapper[7440]: I0312 14:32:39.131853 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:32:39.132539 master-0 kubenswrapper[7440]: I0312 14:32:39.132374 7440 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted" Mar 12 14:32:39.132539 master-0 kubenswrapper[7440]: I0312 14:32:39.132399 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" gracePeriod=3600 Mar 12 14:32:42.805455 master-0 kubenswrapper[7440]: I0312 14:32:42.805402 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:32:42.806059 master-0 kubenswrapper[7440]: E0312 14:32:42.805756 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:32:47.967983 master-0 kubenswrapper[7440]: I0312 14:32:47.967857 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:32:47.968599 master-0 kubenswrapper[7440]: I0312 14:32:47.967998 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" 
podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:32:54.805427 master-0 kubenswrapper[7440]: I0312 14:32:54.805347 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:32:54.806162 master-0 kubenswrapper[7440]: E0312 14:32:54.805754 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7fed292c3d5a90a99bfee43e89190405)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" Mar 12 14:33:07.805322 master-0 kubenswrapper[7440]: I0312 14:33:07.805222 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91" Mar 12 14:33:08.555403 master-0 kubenswrapper[7440]: I0312 14:33:08.555370 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log" Mar 12 14:33:08.556016 master-0 kubenswrapper[7440]: I0312 14:33:08.555976 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7fed292c3d5a90a99bfee43e89190405","Type":"ContainerStarted","Data":"bd7899bffaf6aa78dc3ed5f5798ea564a1a0894027ca075b490729e999a8ce5b"} Mar 12 14:33:16.517173 master-0 kubenswrapper[7440]: I0312 14:33:16.517050 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:33:16.518060 master-0 
kubenswrapper[7440]: I0312 14:33:16.517243 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:33:16.521574 master-0 kubenswrapper[7440]: I0312 14:33:16.521520 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:33:17.968175 master-0 kubenswrapper[7440]: I0312 14:33:17.968083 7440 patch_prober.go:28] interesting pod/machine-config-daemon-ngzc8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:33:17.968692 master-0 kubenswrapper[7440]: I0312 14:33:17.968222 7440 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:33:17.968692 master-0 kubenswrapper[7440]: I0312 14:33:17.968299 7440 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:33:17.969350 master-0 kubenswrapper[7440]: I0312 14:33:17.969298 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff"} pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:33:17.969470 master-0 kubenswrapper[7440]: I0312 14:33:17.969428 7440 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" podUID="8e4d9407-ff79-4396-a37f-896617e024d4" containerName="machine-config-daemon" containerID="cri-o://f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff" gracePeriod=600 Mar 12 14:33:18.625059 master-0 kubenswrapper[7440]: I0312 14:33:18.624964 7440 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff" exitCode=0 Mar 12 14:33:18.625059 master-0 kubenswrapper[7440]: I0312 14:33:18.625024 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff"} Mar 12 14:33:18.625059 master-0 kubenswrapper[7440]: I0312 14:33:18.625056 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"f6e61fb48d9732e09deab678588d21ae5ee12522c122ebf00a93dabd3828c932"} Mar 12 14:33:18.625059 master-0 kubenswrapper[7440]: I0312 14:33:18.625074 7440 scope.go:117] "RemoveContainer" containerID="2624aa96483e7d2f539ca381f3c23b1b80ab32e21f5c81745c07dc9b511b56c4" Mar 12 14:33:25.672127 master-0 kubenswrapper[7440]: I0312 14:33:25.672032 7440 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" exitCode=0 Mar 12 14:33:25.672127 master-0 kubenswrapper[7440]: I0312 14:33:25.672106 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee"} Mar 12 14:33:25.672720 master-0 kubenswrapper[7440]: I0312 14:33:25.672145 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7"} Mar 12 14:33:25.672720 master-0 kubenswrapper[7440]: I0312 14:33:25.672174 7440 scope.go:117] "RemoveContainer" containerID="ced725ff08f0784185b129c88b510bee99f07dfd79fa7c15509acb3b5c4c7595" Mar 12 14:33:26.129770 master-0 kubenswrapper[7440]: I0312 14:33:26.129715 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:33:26.131839 master-0 kubenswrapper[7440]: I0312 14:33:26.131807 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:26.131839 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:26.131839 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:26.131839 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:26.132117 master-0 kubenswrapper[7440]: I0312 14:33:26.132082 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:26.520719 master-0 kubenswrapper[7440]: I0312 14:33:26.520670 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Mar 12 14:33:27.131736 master-0 kubenswrapper[7440]: I0312 14:33:27.131679 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:27.131736 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:27.131736 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:27.131736 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:27.132314 master-0 kubenswrapper[7440]: I0312 14:33:27.131747 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:28.131594 master-0 kubenswrapper[7440]: I0312 14:33:28.131539 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:28.131594 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:28.131594 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:28.131594 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:28.132392 master-0 kubenswrapper[7440]: I0312 14:33:28.132360 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:29.131784 master-0 kubenswrapper[7440]: I0312 14:33:29.131708 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:29.131784 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:29.131784 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:29.131784 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:29.132382 master-0 kubenswrapper[7440]: I0312 14:33:29.131822 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:30.131375 master-0 kubenswrapper[7440]: I0312 14:33:30.131305 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:30.131375 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:30.131375 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:30.131375 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:30.131375 master-0 kubenswrapper[7440]: I0312 14:33:30.131371 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:30.712607 master-0 kubenswrapper[7440]: I0312 14:33:30.712564 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/5.log" Mar 12 14:33:30.713269 master-0 kubenswrapper[7440]: I0312 14:33:30.713080 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/4.log" Mar 12 14:33:30.713486 master-0 kubenswrapper[7440]: I0312 14:33:30.713453 7440 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" exitCode=1 Mar 12 14:33:30.713540 master-0 kubenswrapper[7440]: I0312 14:33:30.713494 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"} Mar 12 14:33:30.713540 master-0 kubenswrapper[7440]: I0312 14:33:30.713531 7440 scope.go:117] "RemoveContainer" containerID="d7590356afea30db90fd18bb64f353e2cda51d0df2cf338f3dd9cfc534cc6343" Mar 12 14:33:30.714188 master-0 kubenswrapper[7440]: I0312 14:33:30.714135 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:33:30.714436 master-0 kubenswrapper[7440]: E0312 14:33:30.714404 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:33:31.130024 master-0 kubenswrapper[7440]: I0312 14:33:31.129962 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:33:31.132055 master-0 kubenswrapper[7440]: I0312 14:33:31.132015 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:31.132055 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:31.132055 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:31.132055 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:31.132221 master-0 kubenswrapper[7440]: I0312 14:33:31.132075 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:31.724813 master-0 kubenswrapper[7440]: I0312 14:33:31.724762 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/5.log" Mar 12 14:33:32.132836 master-0 kubenswrapper[7440]: I0312 14:33:32.132758 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:32.132836 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:32.132836 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:32.132836 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:32.132836 master-0 kubenswrapper[7440]: I0312 14:33:32.132838 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:33.131840 master-0 kubenswrapper[7440]: I0312 14:33:33.131765 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:33.131840 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:33.131840 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:33.131840 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:33.132587 master-0 kubenswrapper[7440]: I0312 14:33:33.131847 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:34.131844 master-0 kubenswrapper[7440]: I0312 14:33:34.131783 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:34.131844 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:34.131844 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:34.131844 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:34.132576 master-0 kubenswrapper[7440]: I0312 14:33:34.131860 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:34.552007 master-0 kubenswrapper[7440]: I0312 14:33:34.551951 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: E0312 14:33:34.552177 7440 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: I0312 14:33:34.552188 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: E0312 14:33:34.552207 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="kube-rbac-proxy" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: I0312 14:33:34.552213 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="kube-rbac-proxy" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: E0312 14:33:34.552225 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: I0312 14:33:34.552230 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: E0312 14:33:34.552244 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="reload" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: I0312 14:33:34.552250 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="reload" Mar 12 14:33:34.552262 master-0 kubenswrapper[7440]: E0312 14:33:34.552269 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="telemeter-client" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552275 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="telemeter-client" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: E0312 
14:33:34.552288 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552293 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552391 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="reload" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552407 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="telemeter-client" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552417 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a8ac56-734c-4d51-9171-0540f8b9f242" containerName="kube-rbac-proxy" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552430 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552437 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:33:34.552685 master-0 kubenswrapper[7440]: I0312 14:33:34.552457 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:33:34.553093 master-0 kubenswrapper[7440]: I0312 14:33:34.552857 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.555624 master-0 kubenswrapper[7440]: I0312 14:33:34.555601 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-2gl5t" Mar 12 14:33:34.556109 master-0 kubenswrapper[7440]: I0312 14:33:34.556054 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 12 14:33:34.561233 master-0 kubenswrapper[7440]: I0312 14:33:34.561178 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 14:33:34.689084 master-0 kubenswrapper[7440]: I0312 14:33:34.689019 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.689084 master-0 kubenswrapper[7440]: I0312 14:33:34.689082 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.689480 master-0 kubenswrapper[7440]: I0312 14:33:34.689431 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.790877 master-0 kubenswrapper[7440]: I0312 14:33:34.790803 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.791117 master-0 kubenswrapper[7440]: I0312 14:33:34.790948 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.791117 master-0 kubenswrapper[7440]: I0312 14:33:34.791005 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.791117 master-0 kubenswrapper[7440]: I0312 14:33:34.791025 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.791258 master-0 kubenswrapper[7440]: I0312 14:33:34.791173 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.807569 master-0 kubenswrapper[7440]: I0312 14:33:34.807442 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:34.878706 master-0 kubenswrapper[7440]: I0312 14:33:34.878609 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:33:35.131765 master-0 kubenswrapper[7440]: I0312 14:33:35.131633 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:35.131765 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:35.131765 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:35.131765 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:35.131765 master-0 kubenswrapper[7440]: I0312 14:33:35.131689 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:35.247525 master-0 kubenswrapper[7440]: I0312 14:33:35.247468 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 12 14:33:35.586723 master-0 kubenswrapper[7440]: I0312 14:33:35.586666 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd"] Mar 12 14:33:35.588163 master-0 kubenswrapper[7440]: I0312 14:33:35.588133 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.589979 master-0 kubenswrapper[7440]: I0312 14:33:35.589933 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 14:33:35.596444 master-0 kubenswrapper[7440]: I0312 14:33:35.596410 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 14:33:35.596652 master-0 kubenswrapper[7440]: I0312 14:33:35.596633 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 14:33:35.596708 master-0 kubenswrapper[7440]: I0312 14:33:35.596664 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 14:33:35.596745 master-0 kubenswrapper[7440]: I0312 14:33:35.596703 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 14:33:35.596774 master-0 kubenswrapper[7440]: I0312 14:33:35.596706 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-ct5mf" Mar 12 14:33:35.606978 master-0 kubenswrapper[7440]: I0312 14:33:35.606884 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd"] Mar 12 14:33:35.631000 master-0 kubenswrapper[7440]: I0312 14:33:35.630943 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 14:33:35.702363 master-0 kubenswrapper[7440]: I0312 14:33:35.702160 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod 
\"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702363 master-0 kubenswrapper[7440]: I0312 14:33:35.702228 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcz9\" (UniqueName: \"kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702363 master-0 kubenswrapper[7440]: I0312 14:33:35.702339 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702363 master-0 kubenswrapper[7440]: I0312 14:33:35.702363 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702363 master-0 kubenswrapper[7440]: I0312 14:33:35.702379 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702829 master-0 
kubenswrapper[7440]: I0312 14:33:35.702407 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702829 master-0 kubenswrapper[7440]: I0312 14:33:35.702430 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.702829 master-0 kubenswrapper[7440]: I0312 14:33:35.702464 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.747516 master-0 kubenswrapper[7440]: I0312 14:33:35.747461 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerStarted","Data":"92d7499402985a174fd8cf44fdbd49d9d08d220559433aa9bf620331ab2599ae"} Mar 12 14:33:35.747516 master-0 kubenswrapper[7440]: I0312 14:33:35.747513 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerStarted","Data":"5efa81dbe1ce010e90dacfcc2b35c64f33e1c5492934d48f9dc1bdd46d4dd233"} Mar 12 14:33:35.765671 master-0 kubenswrapper[7440]: I0312 14:33:35.765582 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.7655618739999999 podStartE2EDuration="1.765561874s" podCreationTimestamp="2026-03-12 14:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:33:35.761605485 +0000 UTC m=+1276.096984054" watchObservedRunningTime="2026-03-12 14:33:35.765561874 +0000 UTC m=+1276.100940433" Mar 12 14:33:35.803728 master-0 kubenswrapper[7440]: I0312 14:33:35.803661 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.803728 master-0 kubenswrapper[7440]: I0312 14:33:35.803712 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.803728 master-0 kubenswrapper[7440]: I0312 14:33:35.803731 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " 
pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.804045 master-0 kubenswrapper[7440]: I0312 14:33:35.803802 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.804045 master-0 kubenswrapper[7440]: I0312 14:33:35.803825 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.804706 master-0 kubenswrapper[7440]: I0312 14:33:35.804668 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.804963 master-0 kubenswrapper[7440]: I0312 14:33:35.804924 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.805108 master-0 kubenswrapper[7440]: I0312 14:33:35.805074 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.805165 master-0 kubenswrapper[7440]: I0312 14:33:35.805143 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.805213 master-0 kubenswrapper[7440]: I0312 14:33:35.805183 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcz9\" (UniqueName: \"kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.805603 master-0 kubenswrapper[7440]: I0312 14:33:35.805548 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.807610 master-0 kubenswrapper[7440]: I0312 14:33:35.807571 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 
14:33:35.809581 master-0 kubenswrapper[7440]: I0312 14:33:35.809551 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.809659 master-0 kubenswrapper[7440]: I0312 14:33:35.809591 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.810941 master-0 kubenswrapper[7440]: I0312 14:33:35.810314 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.824562 master-0 kubenswrapper[7440]: I0312 14:33:35.824481 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcz9\" (UniqueName: \"kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:35.936298 master-0 kubenswrapper[7440]: I0312 14:33:35.936210 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:33:36.130891 master-0 kubenswrapper[7440]: I0312 14:33:36.130841 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:36.130891 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:36.130891 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:36.130891 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:36.133187 master-0 kubenswrapper[7440]: I0312 14:33:36.130929 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:36.336586 master-0 kubenswrapper[7440]: I0312 14:33:36.336431 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd"] Mar 12 14:33:36.342519 master-0 kubenswrapper[7440]: W0312 14:33:36.342044 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9dfe48c_daa1_4c18_9cf5_7b4930a0e649.slice/crio-b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67 WatchSource:0}: Error finding container b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67: Status 404 returned error can't find the container with id b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67 Mar 12 14:33:36.757313 master-0 kubenswrapper[7440]: I0312 14:33:36.757266 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" 
event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"4443df09a8c19650104c98a740a88d33df6130e524690a66362e4946d87ce8bd"} Mar 12 14:33:36.757313 master-0 kubenswrapper[7440]: I0312 14:33:36.757316 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"243af8de94d7247256fe8d5f1c07f4bdc58bf9e725adb6ad3b482cf84320ddf3"} Mar 12 14:33:36.757313 master-0 kubenswrapper[7440]: I0312 14:33:36.757327 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"75a51d91deac1b48c8ef86e4ae313b0ebac186bbd6cc97293836179bad976767"} Mar 12 14:33:36.757313 master-0 kubenswrapper[7440]: I0312 14:33:36.757337 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67"} Mar 12 14:33:36.787382 master-0 kubenswrapper[7440]: I0312 14:33:36.787299 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" podStartSLOduration=1.7872737189999999 podStartE2EDuration="1.787273719s" podCreationTimestamp="2026-03-12 14:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:33:36.783125616 +0000 UTC m=+1277.118504195" watchObservedRunningTime="2026-03-12 14:33:36.787273719 +0000 UTC m=+1277.122652278" Mar 12 14:33:37.131763 master-0 kubenswrapper[7440]: I0312 14:33:37.131679 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:37.131763 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:37.131763 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:37.131763 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:37.132155 master-0 kubenswrapper[7440]: I0312 14:33:37.131796 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:38.132079 master-0 kubenswrapper[7440]: I0312 14:33:38.132011 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:38.132079 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:38.132079 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:38.132079 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:38.132079 master-0 kubenswrapper[7440]: I0312 14:33:38.132085 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:39.132833 master-0 kubenswrapper[7440]: I0312 14:33:39.132701 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:39.132833 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:33:39.132833 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:39.132833 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:39.132833 master-0 kubenswrapper[7440]: I0312 14:33:39.132815 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:40.131661 master-0 kubenswrapper[7440]: I0312 14:33:40.131587 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:40.131661 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:40.131661 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:40.131661 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:40.132076 master-0 kubenswrapper[7440]: I0312 14:33:40.131688 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:41.131836 master-0 kubenswrapper[7440]: I0312 14:33:41.131722 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:41.131836 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:41.131836 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:41.131836 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:41.132864 master-0 kubenswrapper[7440]: I0312 14:33:41.132825 
7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:42.131768 master-0 kubenswrapper[7440]: I0312 14:33:42.131638 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:42.131768 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:42.131768 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:42.131768 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:42.131768 master-0 kubenswrapper[7440]: I0312 14:33:42.131718 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:43.131354 master-0 kubenswrapper[7440]: I0312 14:33:43.131305 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:43.131354 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:43.131354 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:43.131354 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:43.131967 master-0 kubenswrapper[7440]: I0312 14:33:43.131365 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 14:33:44.131923 master-0 kubenswrapper[7440]: I0312 14:33:44.131844 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:44.131923 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:44.131923 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:44.131923 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:44.132550 master-0 kubenswrapper[7440]: I0312 14:33:44.131945 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:44.805494 master-0 kubenswrapper[7440]: I0312 14:33:44.805443 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:33:44.805690 master-0 kubenswrapper[7440]: E0312 14:33:44.805669 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:33:45.131817 master-0 kubenswrapper[7440]: I0312 14:33:45.131680 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:45.131817 master-0 kubenswrapper[7440]: 
[-]has-synced failed: reason withheld Mar 12 14:33:45.131817 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:45.131817 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:45.131817 master-0 kubenswrapper[7440]: I0312 14:33:45.131754 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:46.131549 master-0 kubenswrapper[7440]: I0312 14:33:46.131499 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:46.131549 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:46.131549 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:46.131549 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:46.131949 master-0 kubenswrapper[7440]: I0312 14:33:46.131602 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:47.132179 master-0 kubenswrapper[7440]: I0312 14:33:47.132109 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:47.132179 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:47.132179 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:47.132179 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:47.132985 master-0 
kubenswrapper[7440]: I0312 14:33:47.132202 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:48.131612 master-0 kubenswrapper[7440]: I0312 14:33:48.131557 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:48.131612 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:48.131612 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:48.131612 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:48.132005 master-0 kubenswrapper[7440]: I0312 14:33:48.131619 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:49.131730 master-0 kubenswrapper[7440]: I0312 14:33:49.131656 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:49.131730 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:49.131730 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:49.131730 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:49.132516 master-0 kubenswrapper[7440]: I0312 14:33:49.131746 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:50.130924 master-0 kubenswrapper[7440]: I0312 14:33:50.130841 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:50.130924 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:50.130924 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:50.130924 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:50.131215 master-0 kubenswrapper[7440]: I0312 14:33:50.130960 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:51.131445 master-0 kubenswrapper[7440]: I0312 14:33:51.131336 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:51.131445 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:51.131445 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:51.131445 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:51.131445 master-0 kubenswrapper[7440]: I0312 14:33:51.131412 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:52.132491 master-0 kubenswrapper[7440]: I0312 14:33:52.132431 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:52.132491 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:52.132491 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:52.132491 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:52.133078 master-0 kubenswrapper[7440]: I0312 14:33:52.132506 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:52.759050 master-0 kubenswrapper[7440]: I0312 14:33:52.758994 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vr5md"] Mar 12 14:33:52.759297 master-0 kubenswrapper[7440]: I0312 14:33:52.759215 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" gracePeriod=30 Mar 12 14:33:52.813736 master-0 kubenswrapper[7440]: I0312 14:33:52.813683 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j4p86"] Mar 12 14:33:52.814512 master-0 kubenswrapper[7440]: I0312 14:33:52.814474 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:52.944641 master-0 kubenswrapper[7440]: I0312 14:33:52.944570 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:52.944835 master-0 kubenswrapper[7440]: I0312 14:33:52.944670 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnwc\" (UniqueName: \"kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:52.944835 master-0 kubenswrapper[7440]: I0312 14:33:52.944709 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:52.944835 master-0 kubenswrapper[7440]: I0312 14:33:52.944744 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046305 master-0 kubenswrapper[7440]: I0312 14:33:53.046147 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnwc\" (UniqueName: 
\"kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046305 master-0 kubenswrapper[7440]: I0312 14:33:53.046237 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046305 master-0 kubenswrapper[7440]: I0312 14:33:53.046268 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046581 master-0 kubenswrapper[7440]: I0312 14:33:53.046332 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046619 master-0 kubenswrapper[7440]: I0312 14:33:53.046595 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.046880 master-0 kubenswrapper[7440]: I0312 14:33:53.046842 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.047201 master-0 kubenswrapper[7440]: I0312 14:33:53.047170 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.064937 master-0 kubenswrapper[7440]: I0312 14:33:53.064877 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnwc\" (UniqueName: \"kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc\") pod \"cni-sysctl-allowlist-ds-j4p86\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.130867 master-0 kubenswrapper[7440]: I0312 14:33:53.130788 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:53.130867 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:53.130867 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:53.130867 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:53.131151 master-0 kubenswrapper[7440]: I0312 14:33:53.130879 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:53.141402 master-0 
kubenswrapper[7440]: I0312 14:33:53.141348 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.158000 master-0 kubenswrapper[7440]: W0312 14:33:53.157941 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a WatchSource:0}: Error finding container c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a: Status 404 returned error can't find the container with id c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a Mar 12 14:33:53.886672 master-0 kubenswrapper[7440]: I0312 14:33:53.886629 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" event={"ID":"1f2f9cac-0921-4f6c-b67a-714f0a81a83a","Type":"ContainerStarted","Data":"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2"} Mar 12 14:33:53.886672 master-0 kubenswrapper[7440]: I0312 14:33:53.886679 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" event={"ID":"1f2f9cac-0921-4f6c-b67a-714f0a81a83a","Type":"ContainerStarted","Data":"c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a"} Mar 12 14:33:53.887046 master-0 kubenswrapper[7440]: I0312 14:33:53.886847 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:53.903370 master-0 kubenswrapper[7440]: I0312 14:33:53.903297 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" podStartSLOduration=1.903276041 podStartE2EDuration="1.903276041s" podCreationTimestamp="2026-03-12 14:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:33:53.899425135 +0000 UTC m=+1294.234803704" watchObservedRunningTime="2026-03-12 14:33:53.903276041 +0000 UTC m=+1294.238654600" Mar 12 14:33:54.132286 master-0 kubenswrapper[7440]: I0312 14:33:54.132230 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:54.132286 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:54.132286 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:54.132286 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:54.132604 master-0 kubenswrapper[7440]: I0312 14:33:54.132329 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:54.916783 master-0 kubenswrapper[7440]: I0312 14:33:54.916714 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:33:55.131121 master-0 kubenswrapper[7440]: I0312 14:33:55.131046 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:55.131121 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:55.131121 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:55.131121 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:55.131436 master-0 kubenswrapper[7440]: I0312 14:33:55.131122 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:55.782141 master-0 kubenswrapper[7440]: I0312 14:33:55.782068 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j4p86"] Mar 12 14:33:55.805107 master-0 kubenswrapper[7440]: I0312 14:33:55.805053 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:33:55.805339 master-0 kubenswrapper[7440]: E0312 14:33:55.805309 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:33:56.131932 master-0 kubenswrapper[7440]: I0312 14:33:56.131741 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:56.131932 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:56.131932 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:56.131932 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:56.131932 master-0 kubenswrapper[7440]: I0312 14:33:56.131866 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:56.905727 master-0 
kubenswrapper[7440]: I0312 14:33:56.905631 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" gracePeriod=30 Mar 12 14:33:57.131161 master-0 kubenswrapper[7440]: I0312 14:33:57.131107 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:57.131161 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:57.131161 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:57.131161 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:57.131460 master-0 kubenswrapper[7440]: I0312 14:33:57.131222 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:58.132590 master-0 kubenswrapper[7440]: I0312 14:33:58.132483 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:58.132590 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:58.132590 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:58.132590 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:58.133482 master-0 kubenswrapper[7440]: I0312 14:33:58.133438 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:33:59.131764 master-0 kubenswrapper[7440]: I0312 14:33:59.131675 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:33:59.131764 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:33:59.131764 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:33:59.131764 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:33:59.131764 master-0 kubenswrapper[7440]: I0312 14:33:59.131745 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:00.132117 master-0 kubenswrapper[7440]: I0312 14:34:00.132016 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:00.132117 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:00.132117 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:00.132117 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:00.132117 master-0 kubenswrapper[7440]: I0312 14:34:00.132091 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:01.132230 
master-0 kubenswrapper[7440]: I0312 14:34:01.132093 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:01.132230 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:01.132230 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:01.132230 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:01.132230 master-0 kubenswrapper[7440]: I0312 14:34:01.132188 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:01.202545 master-0 kubenswrapper[7440]: E0312 14:34:01.202351 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:01.207072 master-0 kubenswrapper[7440]: E0312 14:34:01.206961 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:01.208666 master-0 kubenswrapper[7440]: E0312 14:34:01.208611 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:01.208803 master-0 kubenswrapper[7440]: E0312 14:34:01.208670 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:02.131502 master-0 kubenswrapper[7440]: I0312 14:34:02.131436 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:02.131502 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:02.131502 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:02.131502 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:02.131854 master-0 kubenswrapper[7440]: I0312 14:34:02.131519 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:03.132777 master-0 kubenswrapper[7440]: I0312 14:34:03.132712 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:03.132777 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:03.132777 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:03.132777 master-0 kubenswrapper[7440]: healthz check 
failed
Mar 12 14:34:03.133506 master-0 kubenswrapper[7440]: I0312 14:34:03.132782 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:03.143527 master-0 kubenswrapper[7440]: E0312 14:34:03.143468 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:03.145271 master-0 kubenswrapper[7440]: E0312 14:34:03.145176 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:03.147253 master-0 kubenswrapper[7440]: E0312 14:34:03.147202 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:03.147331 master-0 kubenswrapper[7440]: E0312 14:34:03.147255 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins"
Mar 12 14:34:04.131830 master-0 kubenswrapper[7440]: I0312 14:34:04.131777 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:04.131830 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:04.131830 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:04.131830 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:04.132205 master-0 kubenswrapper[7440]: I0312 14:34:04.131849 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:05.132063 master-0 kubenswrapper[7440]: I0312 14:34:05.131954 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:05.132063 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:05.132063 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:05.132063 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:05.132801 master-0 kubenswrapper[7440]: I0312 14:34:05.132100 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:06.130685 master-0 kubenswrapper[7440]: I0312 14:34:06.130605 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:06.130685 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:06.130685 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:06.130685 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:06.131092 master-0 kubenswrapper[7440]: I0312 14:34:06.130744 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:06.546106 master-0 kubenswrapper[7440]: I0312 14:34:06.545996 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 12 14:34:06.547161 master-0 kubenswrapper[7440]: I0312 14:34:06.546303 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-recovery-controller" containerID="cri-o://338028102e5041c5f5cf79657b9c14128ab7afda445b15271f5d150bacb3bcde" gracePeriod=30
Mar 12 14:34:06.547161 master-0 kubenswrapper[7440]: I0312 14:34:06.546464 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer" containerID="cri-o://d961cd077c4348f499a31e617d8bf3df9410762f91851718b3122d68eafa5a20" gracePeriod=30
Mar 12 14:34:06.547161 master-0 kubenswrapper[7440]: I0312 14:34:06.546519 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler" containerID="cri-o://c29049190c2156c35ffa7feae22368ca8c2c0a91bfbd57f97c9a9e38dccc0bdf" gracePeriod=30
Mar 12 14:34:06.547161 master-0 kubenswrapper[7440]: I0312 14:34:06.546964 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547254 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547271 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547291 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547298 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547312 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-recovery-controller"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547320 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-recovery-controller"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547332 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547341 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547356 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547364 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: E0312 14:34:06.547388 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="wait-for-host-port"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547396 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="wait-for-host-port"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547528 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547548 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547562 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-recovery-controller"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547581 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler"
Mar 12 14:34:06.548572 master-0 kubenswrapper[7440]: I0312 14:34:06.547595 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a711bc27e73e2efc239fb72d1184e6" containerName="kube-scheduler-cert-syncer"
Mar 12 14:34:06.671195 master-0 kubenswrapper[7440]: I0312 14:34:06.671110 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.671387 master-0 kubenswrapper[7440]: I0312 14:34:06.671255 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.709202 master-0 kubenswrapper[7440]: I0312 14:34:06.709159 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/1.log"
Mar 12 14:34:06.710685 master-0 kubenswrapper[7440]: I0312 14:34:06.710652 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/0.log"
Mar 12 14:34:06.711253 master-0 kubenswrapper[7440]: I0312 14:34:06.711214 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log"
Mar 12 14:34:06.711712 master-0 kubenswrapper[7440]: I0312 14:34:06.711680 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.714492 master-0 kubenswrapper[7440]: I0312 14:34:06.714385 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="c6a711bc27e73e2efc239fb72d1184e6" podUID="1d3d45b6ce1b3764f9927e623a71adf8"
Mar 12 14:34:06.772593 master-0 kubenswrapper[7440]: I0312 14:34:06.772467 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.772833 master-0 kubenswrapper[7440]: I0312 14:34:06.772609 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.772833 master-0 kubenswrapper[7440]: I0312 14:34:06.772674 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.772833 master-0 kubenswrapper[7440]: I0312 14:34:06.772708 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.805436 master-0 kubenswrapper[7440]: I0312 14:34:06.805326 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"
Mar 12 14:34:06.805594 master-0 kubenswrapper[7440]: E0312 14:34:06.805501 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:34:06.873557 master-0 kubenswrapper[7440]: I0312 14:34:06.873480 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir\") pod \"c6a711bc27e73e2efc239fb72d1184e6\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") "
Mar 12 14:34:06.873557 master-0 kubenswrapper[7440]: I0312 14:34:06.873565 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir\") pod \"c6a711bc27e73e2efc239fb72d1184e6\" (UID: \"c6a711bc27e73e2efc239fb72d1184e6\") "
Mar 12 14:34:06.873838 master-0 kubenswrapper[7440]: I0312 14:34:06.873563 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c6a711bc27e73e2efc239fb72d1184e6" (UID: "c6a711bc27e73e2efc239fb72d1184e6"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:06.873838 master-0 kubenswrapper[7440]: I0312 14:34:06.873651 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c6a711bc27e73e2efc239fb72d1184e6" (UID: "c6a711bc27e73e2efc239fb72d1184e6"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:06.873838 master-0 kubenswrapper[7440]: I0312 14:34:06.873826 7440 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:06.873838 master-0 kubenswrapper[7440]: I0312 14:34:06.873839 7440 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c6a711bc27e73e2efc239fb72d1184e6-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:06.992290 master-0 kubenswrapper[7440]: I0312 14:34:06.992232 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/1.log"
Mar 12 14:34:06.993541 master-0 kubenswrapper[7440]: I0312 14:34:06.993456 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/0.log"
Mar 12 14:34:06.994064 master-0 kubenswrapper[7440]: I0312 14:34:06.994044 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler/0.log"
Mar 12 14:34:06.994458 master-0 kubenswrapper[7440]: I0312 14:34:06.994324 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="d961cd077c4348f499a31e617d8bf3df9410762f91851718b3122d68eafa5a20" exitCode=2
Mar 12 14:34:06.994458 master-0 kubenswrapper[7440]: I0312 14:34:06.994350 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="c29049190c2156c35ffa7feae22368ca8c2c0a91bfbd57f97c9a9e38dccc0bdf" exitCode=0
Mar 12 14:34:06.994458 master-0 kubenswrapper[7440]: I0312 14:34:06.994359 7440 generic.go:334] "Generic (PLEG): container finished" podID="c6a711bc27e73e2efc239fb72d1184e6" containerID="338028102e5041c5f5cf79657b9c14128ab7afda445b15271f5d150bacb3bcde" exitCode=0
Mar 12 14:34:06.994458 master-0 kubenswrapper[7440]: I0312 14:34:06.994379 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a306c26a0173b7c306591d8fb09ccc137a9d2d80b43e56b18e1f7e938dbefa"
Mar 12 14:34:06.994458 master-0 kubenswrapper[7440]: I0312 14:34:06.994411 7440 scope.go:117] "RemoveContainer" containerID="2aee18625338d290a376474bbeead6c6bef3630d9c0a26ff9cffcf446662e724"
Mar 12 14:34:06.994941 master-0 kubenswrapper[7440]: I0312 14:34:06.994837 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:34:06.997991 master-0 kubenswrapper[7440]: I0312 14:34:06.996228 7440 generic.go:334] "Generic (PLEG): container finished" podID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerID="92d7499402985a174fd8cf44fdbd49d9d08d220559433aa9bf620331ab2599ae" exitCode=0
Mar 12 14:34:06.997991 master-0 kubenswrapper[7440]: I0312 14:34:06.996263 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerDied","Data":"92d7499402985a174fd8cf44fdbd49d9d08d220559433aa9bf620331ab2599ae"}
Mar 12 14:34:06.998317 master-0 kubenswrapper[7440]: I0312 14:34:06.998288 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="c6a711bc27e73e2efc239fb72d1184e6" podUID="1d3d45b6ce1b3764f9927e623a71adf8"
Mar 12 14:34:07.019875 master-0 kubenswrapper[7440]: I0312 14:34:07.019848 7440 scope.go:117] "RemoveContainer" containerID="b7832dc4839767f3cbfd92e515cd8bc243889013b3c5aafd8b213f8334c4b7db"
Mar 12 14:34:07.023239 master-0 kubenswrapper[7440]: I0312 14:34:07.023168 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="c6a711bc27e73e2efc239fb72d1184e6" podUID="1d3d45b6ce1b3764f9927e623a71adf8"
Mar 12 14:34:07.131726 master-0 kubenswrapper[7440]: I0312 14:34:07.131538 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:07.131726 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:07.131726 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:07.131726 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:07.131726 master-0 kubenswrapper[7440]: I0312 14:34:07.131640 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:07.811503 master-0 kubenswrapper[7440]: I0312 14:34:07.811446 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a711bc27e73e2efc239fb72d1184e6" path="/var/lib/kubelet/pods/c6a711bc27e73e2efc239fb72d1184e6/volumes"
Mar 12 14:34:08.002100 master-0 kubenswrapper[7440]: I0312 14:34:08.002052 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_c6a711bc27e73e2efc239fb72d1184e6/kube-scheduler-cert-syncer/1.log"
Mar 12 14:34:08.131694 master-0 kubenswrapper[7440]: I0312 14:34:08.131024 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:08.131694 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:08.131694 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:08.131694 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:08.131694 master-0 kubenswrapper[7440]: I0312 14:34:08.131080 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:08.247761 master-0 kubenswrapper[7440]: I0312 14:34:08.247713 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 14:34:08.394920 master-0 kubenswrapper[7440]: I0312 14:34:08.394756 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access\") pod \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") "
Mar 12 14:34:08.394920 master-0 kubenswrapper[7440]: I0312 14:34:08.394828 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock\") pod \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") "
Mar 12 14:34:08.394920 master-0 kubenswrapper[7440]: I0312 14:34:08.394879 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir\") pod \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\" (UID: \"a2c3501c-0ebe-46d0-b2ed-540f96cd137c\") "
Mar 12 14:34:08.395320 master-0 kubenswrapper[7440]: I0312 14:34:08.395242 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a2c3501c-0ebe-46d0-b2ed-540f96cd137c" (UID: "a2c3501c-0ebe-46d0-b2ed-540f96cd137c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:08.395320 master-0 kubenswrapper[7440]: I0312 14:34:08.395250 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock" (OuterVolumeSpecName: "var-lock") pod "a2c3501c-0ebe-46d0-b2ed-540f96cd137c" (UID: "a2c3501c-0ebe-46d0-b2ed-540f96cd137c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:08.399305 master-0 kubenswrapper[7440]: I0312 14:34:08.399261 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a2c3501c-0ebe-46d0-b2ed-540f96cd137c" (UID: "a2c3501c-0ebe-46d0-b2ed-540f96cd137c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:34:08.496637 master-0 kubenswrapper[7440]: I0312 14:34:08.496585 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:08.496637 master-0 kubenswrapper[7440]: I0312 14:34:08.496631 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:08.496637 master-0 kubenswrapper[7440]: I0312 14:34:08.496645 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2c3501c-0ebe-46d0-b2ed-540f96cd137c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:09.008938 master-0 kubenswrapper[7440]: I0312 14:34:09.008877 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerDied","Data":"5efa81dbe1ce010e90dacfcc2b35c64f33e1c5492934d48f9dc1bdd46d4dd233"}
Mar 12 14:34:09.008938 master-0 kubenswrapper[7440]: I0312 14:34:09.008939 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5efa81dbe1ce010e90dacfcc2b35c64f33e1c5492934d48f9dc1bdd46d4dd233"
Mar 12 14:34:09.009527 master-0 kubenswrapper[7440]: I0312 14:34:09.009511 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 14:34:09.131629 master-0 kubenswrapper[7440]: I0312 14:34:09.131573 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:09.131629 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:09.131629 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:09.131629 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:09.132070 master-0 kubenswrapper[7440]: I0312 14:34:09.131640 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:10.131920 master-0 kubenswrapper[7440]: I0312 14:34:10.131826 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:10.131920 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:10.131920 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:10.131920 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:10.132546 master-0 kubenswrapper[7440]: I0312 14:34:10.131937 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:11.132213 master-0 kubenswrapper[7440]: I0312 14:34:11.132056 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:11.132213 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:11.132213 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:11.132213 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:11.132213 master-0 kubenswrapper[7440]: I0312 14:34:11.132166 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:11.202483 master-0 kubenswrapper[7440]: E0312 14:34:11.202319 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:11.204220 master-0 kubenswrapper[7440]: E0312 14:34:11.204116 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:11.206268 master-0 kubenswrapper[7440]: E0312 14:34:11.206226 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:11.206268 master-0 kubenswrapper[7440]: E0312 14:34:11.206262 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins"
Mar 12 14:34:12.131594 master-0 kubenswrapper[7440]: I0312 14:34:12.131528 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:12.131594 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:12.131594 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:12.131594 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:12.132009 master-0 kubenswrapper[7440]: I0312 14:34:12.131597 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:13.131325 master-0 kubenswrapper[7440]: I0312 14:34:13.131261 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:13.131325 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:13.131325 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:13.131325 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:13.131869 master-0 kubenswrapper[7440]: I0312 14:34:13.131329 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:13.144036 master-0 kubenswrapper[7440]: E0312 14:34:13.143974 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:13.145210 master-0 kubenswrapper[7440]: E0312 14:34:13.144984 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:13.146428 master-0 kubenswrapper[7440]: E0312 14:34:13.146385 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 12 14:34:13.146503 master-0 kubenswrapper[7440]: E0312 14:34:13.146424 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins"
Mar 12 14:34:14.131590 master-0 kubenswrapper[7440]: I0312 14:34:14.131521 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:14.131590 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:14.131590 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:14.131590 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:14.132175 master-0 kubenswrapper[7440]: I0312 14:34:14.131598 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:15.131730 master-0 kubenswrapper[7440]: I0312 14:34:15.131679 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:15.131730 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:15.131730 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:15.131730 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:15.132403 master-0 kubenswrapper[7440]: I0312 14:34:15.132373 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:16.130998 master-0 kubenswrapper[7440]: I0312 14:34:16.130924 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:16.130998 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:16.130998 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:16.130998 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:16.131378 master-0 kubenswrapper[7440]: I0312 14:34:16.131013 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:16.975167 master-0 kubenswrapper[7440]: I0312 14:34:16.975110 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 14:34:16.975737 master-0 kubenswrapper[7440]: E0312 14:34:16.975358 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer"
Mar 12 14:34:16.975737 master-0 kubenswrapper[7440]: I0312 14:34:16.975370 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer"
Mar 12 14:34:16.975737 master-0 kubenswrapper[7440]: I0312 14:34:16.975491 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer"
Mar 12 14:34:16.975889 master-0 kubenswrapper[7440]: I0312 14:34:16.975874 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 14:34:16.980033 master-0 kubenswrapper[7440]: I0312 14:34:16.978296 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 14:34:16.980033 master-0 kubenswrapper[7440]: I0312 14:34:16.978527 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xvhv"
Mar 12 14:34:16.987853 master-0 kubenswrapper[7440]: I0312 14:34:16.986700 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 14:34:17.127377 master-0 kubenswrapper[7440]: I0312 14:34:17.127315 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 14:34:17.127377 master-0 kubenswrapper[7440]: I0312 14:34:17.127381 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 14:34:17.127619 master-0 kubenswrapper[7440]: I0312 14:34:17.127416 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 14:34:17.131659 master-0
kubenswrapper[7440]: I0312 14:34:17.131622 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:17.131659 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:17.131659 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:17.131659 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:17.131805 master-0 kubenswrapper[7440]: I0312 14:34:17.131669 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:17.229100 master-0 kubenswrapper[7440]: I0312 14:34:17.228960 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.229100 master-0 kubenswrapper[7440]: I0312 14:34:17.229026 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.229353 master-0 kubenswrapper[7440]: I0312 14:34:17.229136 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: 
\"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.229353 master-0 kubenswrapper[7440]: I0312 14:34:17.229221 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.229878 master-0 kubenswrapper[7440]: I0312 14:34:17.229843 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.256054 master-0 kubenswrapper[7440]: I0312 14:34:17.255720 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.295356 master-0 kubenswrapper[7440]: I0312 14:34:17.295256 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:17.692973 master-0 kubenswrapper[7440]: I0312 14:34:17.692928 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Mar 12 14:34:17.697666 master-0 kubenswrapper[7440]: W0312 14:34:17.697576 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445 WatchSource:0}: Error finding container 331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445: Status 404 returned error can't find the container with id 331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445 Mar 12 14:34:18.071389 master-0 kubenswrapper[7440]: I0312 14:34:18.071344 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e446f8c1-88ee-4891-acff-1634059952b8","Type":"ContainerStarted","Data":"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"} Mar 12 14:34:18.071389 master-0 kubenswrapper[7440]: I0312 14:34:18.071393 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e446f8c1-88ee-4891-acff-1634059952b8","Type":"ContainerStarted","Data":"331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445"} Mar 12 14:34:18.088301 master-0 kubenswrapper[7440]: I0312 14:34:18.088217 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" podStartSLOduration=2.088193101 podStartE2EDuration="2.088193101s" podCreationTimestamp="2026-03-12 14:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:34:18.086406177 +0000 UTC m=+1318.421784786" 
watchObservedRunningTime="2026-03-12 14:34:18.088193101 +0000 UTC m=+1318.423571680" Mar 12 14:34:18.131519 master-0 kubenswrapper[7440]: I0312 14:34:18.131468 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:18.131519 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:18.131519 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:18.131519 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:18.131771 master-0 kubenswrapper[7440]: I0312 14:34:18.131528 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:19.132210 master-0 kubenswrapper[7440]: I0312 14:34:19.132147 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:19.132210 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:19.132210 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:19.132210 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:19.132839 master-0 kubenswrapper[7440]: I0312 14:34:19.132218 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:19.807829 master-0 kubenswrapper[7440]: I0312 14:34:19.807762 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:34:19.824201 master-0 kubenswrapper[7440]: I0312 14:34:19.824154 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1f141e18-9ada-40e1-a2e5-d45ba1f5ac67" Mar 12 14:34:19.824201 master-0 kubenswrapper[7440]: I0312 14:34:19.824196 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1f141e18-9ada-40e1-a2e5-d45ba1f5ac67" Mar 12 14:34:19.837642 master-0 kubenswrapper[7440]: I0312 14:34:19.837586 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:34:19.840023 master-0 kubenswrapper[7440]: I0312 14:34:19.839987 7440 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:34:19.843787 master-0 kubenswrapper[7440]: I0312 14:34:19.843740 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:34:19.852397 master-0 kubenswrapper[7440]: I0312 14:34:19.852342 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:34:19.859142 master-0 kubenswrapper[7440]: I0312 14:34:19.859081 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:34:19.875570 master-0 kubenswrapper[7440]: W0312 14:34:19.875498 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891 WatchSource:0}: Error finding container c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891: Status 404 returned error can't find the container with id c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891 Mar 12 14:34:20.085120 master-0 kubenswrapper[7440]: I0312 14:34:20.085081 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891"} Mar 12 14:34:20.131753 master-0 kubenswrapper[7440]: I0312 14:34:20.131691 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:20.131753 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:20.131753 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:20.131753 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:20.132042 master-0 kubenswrapper[7440]: I0312 14:34:20.131771 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:20.976659 master-0 kubenswrapper[7440]: I0312 14:34:20.976533 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Mar 12 14:34:20.977242 master-0 kubenswrapper[7440]: I0312 14:34:20.976758 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" podUID="e446f8c1-88ee-4891-acff-1634059952b8" containerName="installer" containerID="cri-o://69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277" gracePeriod=30 Mar 12 14:34:21.094206 master-0 kubenswrapper[7440]: I0312 14:34:21.094137 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb"} Mar 12 14:34:21.130919 master-0 kubenswrapper[7440]: I0312 14:34:21.130829 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:21.130919 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:21.130919 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:21.130919 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:21.131206 master-0 kubenswrapper[7440]: I0312 14:34:21.130917 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:21.202597 master-0 kubenswrapper[7440]: E0312 14:34:21.202523 7440 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:21.203988 master-0 kubenswrapper[7440]: E0312 14:34:21.203926 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:21.205235 master-0 kubenswrapper[7440]: E0312 14:34:21.205165 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:21.205292 master-0 kubenswrapper[7440]: E0312 14:34:21.205236 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:21.805245 master-0 kubenswrapper[7440]: I0312 14:34:21.805133 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:34:21.805593 master-0 kubenswrapper[7440]: E0312 14:34:21.805399 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:34:22.131684 master-0 kubenswrapper[7440]: I0312 14:34:22.131515 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:22.131684 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:22.131684 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:22.131684 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:22.131684 master-0 kubenswrapper[7440]: I0312 14:34:22.131595 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:22.896174 master-0 kubenswrapper[7440]: I0312 14:34:22.896108 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vr5md_37aeb9b1-9138-41e8-83d1-8c0e0a60a00e/kube-multus-additional-cni-plugins/0.log" Mar 12 14:34:22.896414 master-0 kubenswrapper[7440]: I0312 14:34:22.896194 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:34:23.007253 master-0 kubenswrapper[7440]: I0312 14:34:23.007160 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready\") pod \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " Mar 12 14:34:23.007534 master-0 kubenswrapper[7440]: I0312 14:34:23.007266 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir\") pod \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " Mar 12 14:34:23.007534 master-0 kubenswrapper[7440]: I0312 14:34:23.007308 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs6k\" (UniqueName: \"kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k\") pod \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " Mar 12 14:34:23.007534 master-0 kubenswrapper[7440]: I0312 14:34:23.007373 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist\") pod \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\" (UID: \"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e\") " Mar 12 14:34:23.007534 master-0 kubenswrapper[7440]: I0312 14:34:23.007384 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" (UID: "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:34:23.007534 master-0 kubenswrapper[7440]: I0312 14:34:23.007487 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready" (OuterVolumeSpecName: "ready") pod "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" (UID: "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:34:23.007884 master-0 kubenswrapper[7440]: I0312 14:34:23.007835 7440 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-ready\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:23.007884 master-0 kubenswrapper[7440]: I0312 14:34:23.007852 7440 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:23.008085 master-0 kubenswrapper[7440]: I0312 14:34:23.008017 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" (UID: "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:34:23.010532 master-0 kubenswrapper[7440]: I0312 14:34:23.010490 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k" (OuterVolumeSpecName: "kube-api-access-6vs6k") pod "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" (UID: "37aeb9b1-9138-41e8-83d1-8c0e0a60a00e"). InnerVolumeSpecName "kube-api-access-6vs6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:34:23.108718 master-0 kubenswrapper[7440]: I0312 14:34:23.108652 7440 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:23.108718 master-0 kubenswrapper[7440]: I0312 14:34:23.108691 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vs6k\" (UniqueName: \"kubernetes.io/projected/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e-kube-api-access-6vs6k\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:23.109937 master-0 kubenswrapper[7440]: I0312 14:34:23.109858 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vr5md_37aeb9b1-9138-41e8-83d1-8c0e0a60a00e/kube-multus-additional-cni-plugins/0.log" Mar 12 14:34:23.109937 master-0 kubenswrapper[7440]: I0312 14:34:23.109924 7440 generic.go:334] "Generic (PLEG): container finished" podID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" exitCode=137 Mar 12 14:34:23.110136 master-0 kubenswrapper[7440]: I0312 14:34:23.109956 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" event={"ID":"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e","Type":"ContainerDied","Data":"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca"} Mar 12 14:34:23.110136 master-0 kubenswrapper[7440]: I0312 14:34:23.109986 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" event={"ID":"37aeb9b1-9138-41e8-83d1-8c0e0a60a00e","Type":"ContainerDied","Data":"a9054a9359d736ed3f297de33ab43b49ffefd2bc4dddda05743306c3b05999a8"} Mar 12 14:34:23.110136 master-0 kubenswrapper[7440]: I0312 14:34:23.110007 7440 scope.go:117] "RemoveContainer" 
containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" Mar 12 14:34:23.110136 master-0 kubenswrapper[7440]: I0312 14:34:23.110129 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vr5md" Mar 12 14:34:23.129118 master-0 kubenswrapper[7440]: I0312 14:34:23.129073 7440 scope.go:117] "RemoveContainer" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" Mar 12 14:34:23.130956 master-0 kubenswrapper[7440]: E0312 14:34:23.130890 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca\": container with ID starting with 3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca not found: ID does not exist" containerID="3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca" Mar 12 14:34:23.131057 master-0 kubenswrapper[7440]: I0312 14:34:23.130957 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca"} err="failed to get container status \"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca\": rpc error: code = NotFound desc = could not find container \"3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca\": container with ID starting with 3787f36c30658b983a3a24e5747d079ed8e5f2c993c16a4b74574ce6690d96ca not found: ID does not exist" Mar 12 14:34:23.134199 master-0 kubenswrapper[7440]: I0312 14:34:23.134140 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:23.134199 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 
14:34:23.134199 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:23.134199 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:23.134761 master-0 kubenswrapper[7440]: I0312 14:34:23.134225 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:23.139298 master-0 kubenswrapper[7440]: I0312 14:34:23.139243 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vr5md"] Mar 12 14:34:23.144280 master-0 kubenswrapper[7440]: E0312 14:34:23.144221 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:23.145795 master-0 kubenswrapper[7440]: E0312 14:34:23.145735 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:23.146587 master-0 kubenswrapper[7440]: I0312 14:34:23.146521 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vr5md"] Mar 12 14:34:23.147644 master-0 kubenswrapper[7440]: E0312 14:34:23.147562 7440 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 14:34:23.147644 master-0 kubenswrapper[7440]: E0312 14:34:23.147618 7440 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:23.811484 master-0 kubenswrapper[7440]: I0312 14:34:23.811424 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" path="/var/lib/kubelet/pods/37aeb9b1-9138-41e8-83d1-8c0e0a60a00e/volumes" Mar 12 14:34:24.131532 master-0 kubenswrapper[7440]: I0312 14:34:24.131407 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:24.131532 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:24.131532 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:24.131532 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:24.131532 master-0 kubenswrapper[7440]: I0312 14:34:24.131467 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:24.376064 master-0 kubenswrapper[7440]: I0312 14:34:24.376003 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 14:34:24.376573 master-0 kubenswrapper[7440]: E0312 14:34:24.376309 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" 
containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:24.376573 master-0 kubenswrapper[7440]: I0312 14:34:24.376327 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:24.376573 master-0 kubenswrapper[7440]: I0312 14:34:24.376472 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="37aeb9b1-9138-41e8-83d1-8c0e0a60a00e" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:24.377034 master-0 kubenswrapper[7440]: I0312 14:34:24.376999 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.392584 master-0 kubenswrapper[7440]: I0312 14:34:24.392532 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 14:34:24.526018 master-0 kubenswrapper[7440]: I0312 14:34:24.525941 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.526342 master-0 kubenswrapper[7440]: I0312 14:34:24.526050 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.526342 master-0 kubenswrapper[7440]: I0312 14:34:24.526126 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.627222 master-0 kubenswrapper[7440]: I0312 14:34:24.627168 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.627441 master-0 kubenswrapper[7440]: I0312 14:34:24.627251 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.627441 master-0 kubenswrapper[7440]: I0312 14:34:24.627312 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.627519 master-0 kubenswrapper[7440]: I0312 14:34:24.627452 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.627553 master-0 kubenswrapper[7440]: I0312 14:34:24.627527 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.643144 master-0 kubenswrapper[7440]: I0312 14:34:24.643063 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.699981 master-0 kubenswrapper[7440]: I0312 14:34:24.699887 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:34:24.850593 master-0 kubenswrapper[7440]: I0312 14:34:24.849742 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 14:34:24.851769 master-0 kubenswrapper[7440]: I0312 14:34:24.851737 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:24.854931 master-0 kubenswrapper[7440]: I0312 14:34:24.854873 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 14:34:24.855054 master-0 kubenswrapper[7440]: I0312 14:34:24.855014 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-wm5wg" Mar 12 14:34:24.860406 master-0 kubenswrapper[7440]: I0312 14:34:24.860228 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 14:34:24.932162 master-0 kubenswrapper[7440]: I0312 14:34:24.931103 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:24.932162 master-0 kubenswrapper[7440]: I0312 14:34:24.931239 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:24.932162 master-0 kubenswrapper[7440]: I0312 14:34:24.931328 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.033139 master-0 
kubenswrapper[7440]: I0312 14:34:25.033073 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.033383 master-0 kubenswrapper[7440]: I0312 14:34:25.033153 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.033383 master-0 kubenswrapper[7440]: I0312 14:34:25.033193 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.033383 master-0 kubenswrapper[7440]: I0312 14:34:25.033267 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.033858 master-0 kubenswrapper[7440]: I0312 14:34:25.033686 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.047064 master-0 
kubenswrapper[7440]: I0312 14:34:25.047020 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.118650 master-0 kubenswrapper[7440]: I0312 14:34:25.118555 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 12 14:34:25.130618 master-0 kubenswrapper[7440]: I0312 14:34:25.130572 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:25.130618 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:25.130618 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:25.130618 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:25.130832 master-0 kubenswrapper[7440]: I0312 14:34:25.130627 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:25.181003 master-0 kubenswrapper[7440]: I0312 14:34:25.180953 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:34:25.553763 master-0 kubenswrapper[7440]: I0312 14:34:25.553722 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 14:34:26.131476 master-0 kubenswrapper[7440]: I0312 14:34:26.131156 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:26.131476 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:26.131476 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:26.131476 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:26.131476 master-0 kubenswrapper[7440]: I0312 14:34:26.131243 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:26.132008 master-0 kubenswrapper[7440]: I0312 14:34:26.131975 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerStarted","Data":"146c62a465e9e1e895adc796ffe1dc3a492864f1300cc5372ec58af6ed5526e2"} Mar 12 14:34:26.132090 master-0 kubenswrapper[7440]: I0312 14:34:26.132014 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerStarted","Data":"de0406e113f23db73705a57d2ac92f7e04c405beeb25e91cf51ec912fcd90a38"} Mar 12 14:34:26.134204 master-0 kubenswrapper[7440]: I0312 14:34:26.134157 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerStarted","Data":"6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd"} Mar 12 14:34:26.134284 master-0 kubenswrapper[7440]: I0312 14:34:26.134216 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerStarted","Data":"d550ea8dc31005b416b9c69f57e3f529e1fb9f7cb9468cf14d70b47c6fe1bf41"} Mar 12 14:34:26.156683 master-0 kubenswrapper[7440]: I0312 14:34:26.156592 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.15656728 podStartE2EDuration="2.15656728s" podCreationTimestamp="2026-03-12 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:34:26.153633956 +0000 UTC m=+1326.489012535" watchObservedRunningTime="2026-03-12 14:34:26.15656728 +0000 UTC m=+1326.491945839" Mar 12 14:34:26.179780 master-0 kubenswrapper[7440]: I0312 14:34:26.178913 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.178842139 podStartE2EDuration="2.178842139s" podCreationTimestamp="2026-03-12 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:34:26.174229463 +0000 UTC m=+1326.509608022" watchObservedRunningTime="2026-03-12 14:34:26.178842139 +0000 UTC m=+1326.514220698" Mar 12 14:34:27.036248 master-0 kubenswrapper[7440]: I0312 14:34:27.036191 7440 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-j4p86_1f2f9cac-0921-4f6c-b67a-714f0a81a83a/kube-multus-additional-cni-plugins/0.log" Mar 12 14:34:27.036248 master-0 kubenswrapper[7440]: I0312 14:34:27.036261 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:34:27.131258 master-0 kubenswrapper[7440]: I0312 14:34:27.131183 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:27.131258 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:27.131258 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:27.131258 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:27.131533 master-0 kubenswrapper[7440]: I0312 14:34:27.131291 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:27.143239 master-0 kubenswrapper[7440]: I0312 14:34:27.143195 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-j4p86_1f2f9cac-0921-4f6c-b67a-714f0a81a83a/kube-multus-additional-cni-plugins/0.log" Mar 12 14:34:27.143239 master-0 kubenswrapper[7440]: I0312 14:34:27.143237 7440 generic.go:334] "Generic (PLEG): container finished" podID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" exitCode=137 Mar 12 14:34:27.143536 master-0 kubenswrapper[7440]: I0312 14:34:27.143292 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" 
event={"ID":"1f2f9cac-0921-4f6c-b67a-714f0a81a83a","Type":"ContainerDied","Data":"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2"} Mar 12 14:34:27.143536 master-0 kubenswrapper[7440]: I0312 14:34:27.143363 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" event={"ID":"1f2f9cac-0921-4f6c-b67a-714f0a81a83a","Type":"ContainerDied","Data":"c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a"} Mar 12 14:34:27.143536 master-0 kubenswrapper[7440]: I0312 14:34:27.143411 7440 scope.go:117] "RemoveContainer" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" Mar 12 14:34:27.143536 master-0 kubenswrapper[7440]: I0312 14:34:27.143463 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j4p86" Mar 12 14:34:27.167055 master-0 kubenswrapper[7440]: I0312 14:34:27.166977 7440 scope.go:117] "RemoveContainer" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" Mar 12 14:34:27.167649 master-0 kubenswrapper[7440]: E0312 14:34:27.167576 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2\": container with ID starting with f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2 not found: ID does not exist" containerID="f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2" Mar 12 14:34:27.167768 master-0 kubenswrapper[7440]: I0312 14:34:27.167642 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2"} err="failed to get container status \"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2\": rpc error: code = NotFound desc = could not find container 
\"f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2\": container with ID starting with f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2 not found: ID does not exist" Mar 12 14:34:27.169759 master-0 kubenswrapper[7440]: I0312 14:34:27.169650 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnwc\" (UniqueName: \"kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc\") pod \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " Mar 12 14:34:27.169846 master-0 kubenswrapper[7440]: I0312 14:34:27.169779 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready\") pod \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " Mar 12 14:34:27.169892 master-0 kubenswrapper[7440]: I0312 14:34:27.169868 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir\") pod \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " Mar 12 14:34:27.169980 master-0 kubenswrapper[7440]: I0312 14:34:27.169955 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist\") pod \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\" (UID: \"1f2f9cac-0921-4f6c-b67a-714f0a81a83a\") " Mar 12 14:34:27.170092 master-0 kubenswrapper[7440]: I0312 14:34:27.170031 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "1f2f9cac-0921-4f6c-b67a-714f0a81a83a" (UID: 
"1f2f9cac-0921-4f6c-b67a-714f0a81a83a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:34:27.170462 master-0 kubenswrapper[7440]: I0312 14:34:27.170427 7440 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:27.170515 master-0 kubenswrapper[7440]: I0312 14:34:27.170413 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready" (OuterVolumeSpecName: "ready") pod "1f2f9cac-0921-4f6c-b67a-714f0a81a83a" (UID: "1f2f9cac-0921-4f6c-b67a-714f0a81a83a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:34:27.170870 master-0 kubenswrapper[7440]: I0312 14:34:27.170802 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "1f2f9cac-0921-4f6c-b67a-714f0a81a83a" (UID: "1f2f9cac-0921-4f6c-b67a-714f0a81a83a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:34:27.173628 master-0 kubenswrapper[7440]: I0312 14:34:27.173539 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc" (OuterVolumeSpecName: "kube-api-access-kvnwc") pod "1f2f9cac-0921-4f6c-b67a-714f0a81a83a" (UID: "1f2f9cac-0921-4f6c-b67a-714f0a81a83a"). InnerVolumeSpecName "kube-api-access-kvnwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:34:27.271795 master-0 kubenswrapper[7440]: I0312 14:34:27.271378 7440 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:27.271795 master-0 kubenswrapper[7440]: I0312 14:34:27.271481 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnwc\" (UniqueName: \"kubernetes.io/projected/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-kube-api-access-kvnwc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:27.271795 master-0 kubenswrapper[7440]: I0312 14:34:27.271526 7440 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/1f2f9cac-0921-4f6c-b67a-714f0a81a83a-ready\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:27.471510 master-0 kubenswrapper[7440]: I0312 14:34:27.471452 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j4p86"] Mar 12 14:34:27.475604 master-0 kubenswrapper[7440]: I0312 14:34:27.475567 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j4p86"] Mar 12 14:34:27.812578 master-0 kubenswrapper[7440]: I0312 14:34:27.812524 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" path="/var/lib/kubelet/pods/1f2f9cac-0921-4f6c-b67a-714f0a81a83a/volumes" Mar 12 14:34:28.132223 master-0 kubenswrapper[7440]: I0312 14:34:28.132062 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:28.132223 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:28.132223 master-0 
kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:28.132223 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:28.132223 master-0 kubenswrapper[7440]: I0312 14:34:28.132150 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:29.132507 master-0 kubenswrapper[7440]: I0312 14:34:29.132434 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:29.132507 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:29.132507 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:29.132507 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:29.133160 master-0 kubenswrapper[7440]: I0312 14:34:29.132521 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:30.131322 master-0 kubenswrapper[7440]: I0312 14:34:30.131259 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:30.131322 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:30.131322 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:30.131322 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:30.131322 master-0 kubenswrapper[7440]: I0312 14:34:30.131321 7440 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:31.635881 master-0 kubenswrapper[7440]: I0312 14:34:31.635790 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:31.635881 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:31.635881 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:31.635881 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:31.636652 master-0 kubenswrapper[7440]: I0312 14:34:31.635921 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:32.130813 master-0 kubenswrapper[7440]: I0312 14:34:32.130751 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:32.130813 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:32.130813 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:32.130813 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:32.131198 master-0 kubenswrapper[7440]: I0312 14:34:32.131169 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 14:34:33.131551 master-0 kubenswrapper[7440]: I0312 14:34:33.131490 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:33.131551 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:33.131551 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:33.131551 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:33.132111 master-0 kubenswrapper[7440]: I0312 14:34:33.131563 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:34.131327 master-0 kubenswrapper[7440]: I0312 14:34:34.131248 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:34.131327 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:34.131327 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:34.131327 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:34.132147 master-0 kubenswrapper[7440]: I0312 14:34:34.131333 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:34.917864 master-0 kubenswrapper[7440]: I0312 14:34:34.917811 7440 scope.go:117] "RemoveContainer" containerID="d961cd077c4348f499a31e617d8bf3df9410762f91851718b3122d68eafa5a20" Mar 12 14:34:34.933607 
master-0 kubenswrapper[7440]: I0312 14:34:34.933544 7440 scope.go:117] "RemoveContainer" containerID="c29049190c2156c35ffa7feae22368ca8c2c0a91bfbd57f97c9a9e38dccc0bdf" Mar 12 14:34:34.950077 master-0 kubenswrapper[7440]: I0312 14:34:34.950038 7440 scope.go:117] "RemoveContainer" containerID="338028102e5041c5f5cf79657b9c14128ab7afda445b15271f5d150bacb3bcde" Mar 12 14:34:34.971481 master-0 kubenswrapper[7440]: I0312 14:34:34.971438 7440 scope.go:117] "RemoveContainer" containerID="3dd7a4da04b6c01935c26571e75395e15a7850b95c867d09c0ff6a148fabca36" Mar 12 14:34:35.132002 master-0 kubenswrapper[7440]: I0312 14:34:35.131944 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:35.132002 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:35.132002 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:35.132002 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:35.132594 master-0 kubenswrapper[7440]: I0312 14:34:35.132009 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:36.131089 master-0 kubenswrapper[7440]: I0312 14:34:36.130972 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:36.131089 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:36.131089 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:36.131089 master-0 kubenswrapper[7440]: healthz check 
failed Mar 12 14:34:36.131482 master-0 kubenswrapper[7440]: I0312 14:34:36.131125 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:36.804865 master-0 kubenswrapper[7440]: I0312 14:34:36.804785 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:34:36.805684 master-0 kubenswrapper[7440]: E0312 14:34:36.805030 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:34:37.131086 master-0 kubenswrapper[7440]: I0312 14:34:37.131008 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:37.131086 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:37.131086 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:37.131086 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:37.131086 master-0 kubenswrapper[7440]: I0312 14:34:37.131071 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:37.209067 master-0 kubenswrapper[7440]: I0312 14:34:37.209002 7440 
generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb" exitCode=0 Mar 12 14:34:37.209067 master-0 kubenswrapper[7440]: I0312 14:34:37.209059 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb"} Mar 12 14:34:37.837689 master-0 kubenswrapper[7440]: I0312 14:34:37.837650 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dbdr9"] Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: E0312 14:34:37.838348 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.838364 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.838485 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2f9cac-0921-4f6c-b67a-714f0a81a83a" containerName="kube-multus-additional-cni-plugins" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.839121 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.841097 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-vbgk2" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.841150 7440 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.844173 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 14:34:37.844288 master-0 kubenswrapper[7440]: I0312 14:34:37.844182 7440 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 14:34:37.850010 master-0 kubenswrapper[7440]: I0312 14:34:37.848649 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dbdr9"] Mar 12 14:34:37.919526 master-0 kubenswrapper[7440]: I0312 14:34:37.919479 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6gf\" (UniqueName: \"kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:37.919704 master-0 kubenswrapper[7440]: I0312 14:34:37.919599 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.021131 master-0 kubenswrapper[7440]: I0312 14:34:38.021060 7440 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.021313 master-0 kubenswrapper[7440]: I0312 14:34:38.021198 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6gf\" (UniqueName: \"kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.024590 master-0 kubenswrapper[7440]: I0312 14:34:38.024566 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.036093 master-0 kubenswrapper[7440]: I0312 14:34:38.036071 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6gf\" (UniqueName: \"kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.131976 master-0 kubenswrapper[7440]: I0312 14:34:38.131922 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:38.131976 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:38.131976 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:38.131976 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:34:38.132488 master-0 kubenswrapper[7440]: I0312 14:34:38.132446 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:38.179738 master-0 kubenswrapper[7440]: I0312 14:34:38.179577 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:34:38.218454 master-0 kubenswrapper[7440]: I0312 14:34:38.218399 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"38a182ab60d59f721fb8126757690cc7012aae3a440b852f434d3a3df1616418"} Mar 12 14:34:38.218454 master-0 kubenswrapper[7440]: I0312 14:34:38.218443 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"fb3c45839cbe90283d02c62fead173d9e325341ad3690ee7a41efec589b54f05"} Mar 12 14:34:38.218454 master-0 kubenswrapper[7440]: I0312 14:34:38.218456 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"8d3f8c6c0f2e16a16de21bbdef81829ff48d83da35a97f9706694fdb99e2f9cc"} Mar 12 14:34:38.218756 master-0 kubenswrapper[7440]: I0312 14:34:38.218688 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:34:38.244155 master-0 kubenswrapper[7440]: I0312 14:34:38.239495 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=19.239477499 podStartE2EDuration="19.239477499s" podCreationTimestamp="2026-03-12 14:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:34:38.23749783 +0000 UTC m=+1338.572876419" watchObservedRunningTime="2026-03-12 14:34:38.239477499 +0000 UTC m=+1338.574856058" Mar 12 14:34:38.614516 master-0 kubenswrapper[7440]: I0312 14:34:38.614461 7440 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dbdr9"] Mar 12 14:34:39.131503 master-0 kubenswrapper[7440]: I0312 14:34:39.131458 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:39.131503 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:39.131503 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:39.131503 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:39.132311 master-0 kubenswrapper[7440]: I0312 14:34:39.132283 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:39.225872 master-0 kubenswrapper[7440]: I0312 14:34:39.225822 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dbdr9" event={"ID":"ef5679f7-5bf5-409d-b74b-64a9cbb6c701","Type":"ContainerStarted","Data":"2bc2451d30c899d724765075921ba2037d7b62249caf8354e71f78f87b61472d"} Mar 12 14:34:39.225872 master-0 kubenswrapper[7440]: I0312 14:34:39.225877 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-dbdr9" event={"ID":"ef5679f7-5bf5-409d-b74b-64a9cbb6c701","Type":"ContainerStarted","Data":"5a8c18378832b96fedb1cc482f9c56eff1b7bedfc155a7a794d6f4818bd05ce5"} Mar 12 14:34:39.242522 master-0 kubenswrapper[7440]: I0312 14:34:39.242456 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dbdr9" podStartSLOduration=9.242438704 podStartE2EDuration="9.242438704s" podCreationTimestamp="2026-03-12 14:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:34:39.239184842 +0000 UTC m=+1339.574563411" watchObservedRunningTime="2026-03-12 14:34:39.242438704 +0000 UTC m=+1339.577817263" Mar 12 14:34:40.132626 master-0 kubenswrapper[7440]: I0312 14:34:40.132477 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:40.132626 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:40.132626 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:40.132626 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:40.132626 master-0 kubenswrapper[7440]: I0312 14:34:40.132628 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:41.131351 master-0 kubenswrapper[7440]: I0312 14:34:41.131237 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 12 14:34:41.131351 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:41.131351 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:41.131351 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:41.131351 master-0 kubenswrapper[7440]: I0312 14:34:41.131313 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:42.131672 master-0 kubenswrapper[7440]: I0312 14:34:42.131612 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:42.131672 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:42.131672 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:42.131672 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:42.132674 master-0 kubenswrapper[7440]: I0312 14:34:42.131677 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:43.131575 master-0 kubenswrapper[7440]: I0312 14:34:43.131490 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:43.131575 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:43.131575 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:43.131575 master-0 
kubenswrapper[7440]: healthz check failed Mar 12 14:34:43.131575 master-0 kubenswrapper[7440]: I0312 14:34:43.131552 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:44.131199 master-0 kubenswrapper[7440]: I0312 14:34:44.131087 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:44.131199 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:44.131199 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:44.131199 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:44.131551 master-0 kubenswrapper[7440]: I0312 14:34:44.131198 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:45.131476 master-0 kubenswrapper[7440]: I0312 14:34:45.131408 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:45.131476 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:45.131476 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:45.131476 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:45.132064 master-0 kubenswrapper[7440]: I0312 14:34:45.131498 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:46.131502 master-0 kubenswrapper[7440]: I0312 14:34:46.131434 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:46.131502 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:46.131502 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:46.131502 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:46.131502 master-0 kubenswrapper[7440]: I0312 14:34:46.131490 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:47.131362 master-0 kubenswrapper[7440]: I0312 14:34:47.131300 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:47.131362 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:47.131362 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:47.131362 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:47.132007 master-0 kubenswrapper[7440]: I0312 14:34:47.131374 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:48.131771 
master-0 kubenswrapper[7440]: I0312 14:34:48.131652 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:48.131771 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:48.131771 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:48.131771 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:48.132548 master-0 kubenswrapper[7440]: I0312 14:34:48.131809 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:48.631556 master-0 kubenswrapper[7440]: E0312 14:34:48.631452 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-conmon-3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-conmon-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-conmon-69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:34:48.631813 master-0 kubenswrapper[7440]: E0312 14:34:48.631593 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-conmon-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-conmon-3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:34:48.631813 master-0 kubenswrapper[7440]: E0312 14:34:48.631634 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-conmon-3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-conmon-69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-c90045977993a6dcb0bd1d9f253b5b8f4a42bd71e23759614a70642a6d82a49a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2f9cac_0921_4f6c_b67a_714f0a81a83a.slice/crio-conmon-f3162ade748647ac51eb27adbbdad0e90eb46a17defbfc59116695e1518757d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode446f8c1_88ee_4891_acff_1634059952b8.slice/crio-69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:34:48.867222 master-0 kubenswrapper[7440]: I0312 14:34:48.867182 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-retry-1-master-0_e446f8c1-88ee-4891-acff-1634059952b8/installer/0.log" Mar 12 14:34:48.867325 master-0 kubenswrapper[7440]: I0312 14:34:48.867246 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 12 14:34:48.998389 master-0 kubenswrapper[7440]: I0312 14:34:48.998328 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock\") pod \"e446f8c1-88ee-4891-acff-1634059952b8\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " Mar 12 14:34:48.998389 master-0 kubenswrapper[7440]: I0312 14:34:48.998397 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir\") pod \"e446f8c1-88ee-4891-acff-1634059952b8\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " Mar 12 14:34:48.998831 master-0 kubenswrapper[7440]: I0312 14:34:48.998453 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access\") pod \"e446f8c1-88ee-4891-acff-1634059952b8\" (UID: \"e446f8c1-88ee-4891-acff-1634059952b8\") " Mar 12 14:34:48.998831 master-0 kubenswrapper[7440]: I0312 14:34:48.998506 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock" (OuterVolumeSpecName: "var-lock") pod "e446f8c1-88ee-4891-acff-1634059952b8" (UID: "e446f8c1-88ee-4891-acff-1634059952b8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:34:48.998831 master-0 kubenswrapper[7440]: I0312 14:34:48.998538 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e446f8c1-88ee-4891-acff-1634059952b8" (UID: "e446f8c1-88ee-4891-acff-1634059952b8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:34:48.999003 master-0 kubenswrapper[7440]: I0312 14:34:48.998981 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:48.999003 master-0 kubenswrapper[7440]: I0312 14:34:48.998997 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e446f8c1-88ee-4891-acff-1634059952b8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:49.001872 master-0 kubenswrapper[7440]: I0312 14:34:49.001834 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e446f8c1-88ee-4891-acff-1634059952b8" (UID: "e446f8c1-88ee-4891-acff-1634059952b8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:34:49.101013 master-0 kubenswrapper[7440]: I0312 14:34:49.100879 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e446f8c1-88ee-4891-acff-1634059952b8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:34:49.131450 master-0 kubenswrapper[7440]: I0312 14:34:49.131394 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:34:49.131450 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:34:49.131450 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:34:49.131450 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:34:49.131772 master-0 kubenswrapper[7440]: I0312 14:34:49.131461 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:34:49.305970 master-0 kubenswrapper[7440]: I0312 14:34:49.305921 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-retry-1-master-0_e446f8c1-88ee-4891-acff-1634059952b8/installer/0.log" Mar 12 14:34:49.307334 master-0 kubenswrapper[7440]: I0312 14:34:49.306010 7440 generic.go:334] "Generic (PLEG): container finished" podID="e446f8c1-88ee-4891-acff-1634059952b8" containerID="69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277" exitCode=1 Mar 12 14:34:49.307334 master-0 kubenswrapper[7440]: I0312 14:34:49.306044 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" 
event={"ID":"e446f8c1-88ee-4891-acff-1634059952b8","Type":"ContainerDied","Data":"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"}
Mar 12 14:34:49.307334 master-0 kubenswrapper[7440]: I0312 14:34:49.306076 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e446f8c1-88ee-4891-acff-1634059952b8","Type":"ContainerDied","Data":"331ad83b0b6e86ed94272c41bd3bff12e2a7b521900d8d64bcaf467e7a689445"}
Mar 12 14:34:49.307334 master-0 kubenswrapper[7440]: I0312 14:34:49.306075 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 12 14:34:49.307334 master-0 kubenswrapper[7440]: I0312 14:34:49.306090 7440 scope.go:117] "RemoveContainer" containerID="69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"
Mar 12 14:34:49.329321 master-0 kubenswrapper[7440]: I0312 14:34:49.329267 7440 scope.go:117] "RemoveContainer" containerID="69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"
Mar 12 14:34:49.329868 master-0 kubenswrapper[7440]: E0312 14:34:49.329827 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277\": container with ID starting with 69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277 not found: ID does not exist" containerID="69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"
Mar 12 14:34:49.330004 master-0 kubenswrapper[7440]: I0312 14:34:49.329862 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277"} err="failed to get container status \"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277\": rpc error: code = NotFound desc = could not find container \"69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277\": container with ID starting with 69b1a662825b18e27460359c18d2fbfc7e22bb39335fd144982b4c8a46a63277 not found: ID does not exist"
Mar 12 14:34:49.365101 master-0 kubenswrapper[7440]: I0312 14:34:49.364067 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 14:34:49.389087 master-0 kubenswrapper[7440]: I0312 14:34:49.387683 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 12 14:34:49.814795 master-0 kubenswrapper[7440]: I0312 14:34:49.814724 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e446f8c1-88ee-4891-acff-1634059952b8" path="/var/lib/kubelet/pods/e446f8c1-88ee-4891-acff-1634059952b8/volumes"
Mar 12 14:34:50.132207 master-0 kubenswrapper[7440]: I0312 14:34:50.132049 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:50.132207 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:50.132207 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:50.132207 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:50.132207 master-0 kubenswrapper[7440]: I0312 14:34:50.132144 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:51.130452 master-0 kubenswrapper[7440]: I0312 14:34:51.130338 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:51.130452 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:51.130452 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:51.130452 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:51.130452 master-0 kubenswrapper[7440]: I0312 14:34:51.130399 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:51.805036 master-0 kubenswrapper[7440]: I0312 14:34:51.804995 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"
Mar 12 14:34:51.805595 master-0 kubenswrapper[7440]: E0312 14:34:51.805570 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:34:52.132311 master-0 kubenswrapper[7440]: I0312 14:34:52.132154 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:52.132311 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:52.132311 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:52.132311 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:52.132311 master-0 kubenswrapper[7440]: I0312 14:34:52.132240 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:53.131945 master-0 kubenswrapper[7440]: I0312 14:34:53.131851 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:53.131945 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:53.131945 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:53.131945 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:53.132250 master-0 kubenswrapper[7440]: I0312 14:34:53.131974 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:54.132661 master-0 kubenswrapper[7440]: I0312 14:34:54.132565 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:54.132661 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:54.132661 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:54.132661 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:54.132661 master-0 kubenswrapper[7440]: I0312 14:34:54.132663 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:55.133145 master-0 kubenswrapper[7440]: I0312 14:34:55.133069 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:55.133145 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:55.133145 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:55.133145 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:55.134180 master-0 kubenswrapper[7440]: I0312 14:34:55.133152 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:56.130765 master-0 kubenswrapper[7440]: I0312 14:34:56.130712 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:56.130765 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:56.130765 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:56.130765 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:56.131061 master-0 kubenswrapper[7440]: I0312 14:34:56.130785 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:57.132818 master-0 kubenswrapper[7440]: I0312 14:34:57.132732 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:57.132818 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:57.132818 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:57.132818 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:57.132818 master-0 kubenswrapper[7440]: I0312 14:34:57.132802 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:58.131682 master-0 kubenswrapper[7440]: I0312 14:34:58.131589 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:58.131682 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:58.131682 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:58.131682 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:58.132313 master-0 kubenswrapper[7440]: I0312 14:34:58.131698 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:58.443688 master-0 kubenswrapper[7440]: I0312 14:34:58.443543 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 14:34:58.444287 master-0 kubenswrapper[7440]: I0312 14:34:58.443870 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager" containerID="cri-o://cf5f8f103a771fcea458b305dc771a6ec643f8d62a671cc46fbc879cf21a71e2" gracePeriod=30
Mar 12 14:34:58.444287 master-0 kubenswrapper[7440]: I0312 14:34:58.444084 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" containerID="cri-o://bd7899bffaf6aa78dc3ed5f5798ea564a1a0894027ca075b490729e999a8ce5b" gracePeriod=30
Mar 12 14:34:58.444287 master-0 kubenswrapper[7440]: I0312 14:34:58.444145 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://0dec01a437416a94b1faca50b639752f8ecf1a0b753ff095fb2b1362f1488914" gracePeriod=30
Mar 12 14:34:58.444287 master-0 kubenswrapper[7440]: I0312 14:34:58.444203 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://d88b47b724ff96f583f2f5d18384ac675317e999c797b06ce407d3a96a3c0fcd" gracePeriod=30
Mar 12 14:34:58.449179 master-0 kubenswrapper[7440]: I0312 14:34:58.449119 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 14:34:58.449479 master-0 kubenswrapper[7440]: E0312 14:34:58.449453 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449479 master-0 kubenswrapper[7440]: I0312 14:34:58.449468 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449479 master-0 kubenswrapper[7440]: E0312 14:34:58.449480 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e446f8c1-88ee-4891-acff-1634059952b8" containerName="installer"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449490 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="e446f8c1-88ee-4891-acff-1634059952b8" containerName="installer"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449504 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449513 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449529 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449537 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449551 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449558 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449571 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449578 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449593 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449600 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449610 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449618 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449633 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449640 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449660 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449667 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: E0312 14:34:58.449680 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-recovery-controller"
Mar 12 14:34:58.449713 master-0 kubenswrapper[7440]: I0312 14:34:58.449688 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-recovery-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449820 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449834 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449844 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-recovery-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449858 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449867 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449877 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="e446f8c1-88ee-4891-acff-1634059952b8" containerName="installer"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449886 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449913 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449928 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449938 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449955 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.449963 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: E0312 14:34:58.450108 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.450118 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: E0312 14:34:58.450133 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.450142 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="kube-controller-manager-cert-syncer"
Mar 12 14:34:58.450894 master-0 kubenswrapper[7440]: I0312 14:34:58.450271 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller"
Mar 12 14:34:58.540061 master-0 kubenswrapper[7440]: I0312 14:34:58.540014 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.540061 master-0 kubenswrapper[7440]: I0312 14:34:58.540092 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.617870 master-0 kubenswrapper[7440]: I0312 14:34:58.617772 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/1.log"
Mar 12 14:34:58.619015 master-0 kubenswrapper[7440]: I0312 14:34:58.618855 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log"
Mar 12 14:34:58.619806 master-0 kubenswrapper[7440]: I0312 14:34:58.619318 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.622077 master-0 kubenswrapper[7440]: I0312 14:34:58.622011 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7fed292c3d5a90a99bfee43e89190405" podUID="965d6e0e3f611771f8ba2352415f565a"
Mar 12 14:34:58.642099 master-0 kubenswrapper[7440]: I0312 14:34:58.642047 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.642314 master-0 kubenswrapper[7440]: I0312 14:34:58.642200 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.642314 master-0 kubenswrapper[7440]: I0312 14:34:58.642239 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.642382 master-0 kubenswrapper[7440]: I0312 14:34:58.642333 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:58.743572 master-0 kubenswrapper[7440]: I0312 14:34:58.743374 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir\") pod \"7fed292c3d5a90a99bfee43e89190405\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") "
Mar 12 14:34:58.743851 master-0 kubenswrapper[7440]: I0312 14:34:58.743491 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7fed292c3d5a90a99bfee43e89190405" (UID: "7fed292c3d5a90a99bfee43e89190405"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:58.743851 master-0 kubenswrapper[7440]: I0312 14:34:58.743748 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir\") pod \"7fed292c3d5a90a99bfee43e89190405\" (UID: \"7fed292c3d5a90a99bfee43e89190405\") "
Mar 12 14:34:58.743851 master-0 kubenswrapper[7440]: I0312 14:34:58.743818 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7fed292c3d5a90a99bfee43e89190405" (UID: "7fed292c3d5a90a99bfee43e89190405"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:34:58.744193 master-0 kubenswrapper[7440]: I0312 14:34:58.744147 7440 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:58.744193 master-0 kubenswrapper[7440]: I0312 14:34:58.744174 7440 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fed292c3d5a90a99bfee43e89190405-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:34:58.775119 master-0 kubenswrapper[7440]: E0312 14:34:58.775060 7440 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod9a2b4b06_98cd_4ca3_aebe_d49651c6013f.slice/crio-conmon-6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 14:34:59.132270 master-0 kubenswrapper[7440]: I0312 14:34:59.132150 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:34:59.132270 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:34:59.132270 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:34:59.132270 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:34:59.132270 master-0 kubenswrapper[7440]: I0312 14:34:59.132227 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:34:59.389249 master-0 kubenswrapper[7440]: I0312 14:34:59.389120 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/1.log"
Mar 12 14:34:59.390774 master-0 kubenswrapper[7440]: I0312 14:34:59.390726 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/0.log"
Mar 12 14:34:59.391667 master-0 kubenswrapper[7440]: I0312 14:34:59.391619 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="bd7899bffaf6aa78dc3ed5f5798ea564a1a0894027ca075b490729e999a8ce5b" exitCode=0
Mar 12 14:34:59.391667 master-0 kubenswrapper[7440]: I0312 14:34:59.391659 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="0dec01a437416a94b1faca50b639752f8ecf1a0b753ff095fb2b1362f1488914" exitCode=2
Mar 12 14:34:59.391763 master-0 kubenswrapper[7440]: I0312 14:34:59.391673 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="d88b47b724ff96f583f2f5d18384ac675317e999c797b06ce407d3a96a3c0fcd" exitCode=0
Mar 12 14:34:59.391763 master-0 kubenswrapper[7440]: I0312 14:34:59.391685 7440 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="cf5f8f103a771fcea458b305dc771a6ec643f8d62a671cc46fbc879cf21a71e2" exitCode=0
Mar 12 14:34:59.391824 master-0 kubenswrapper[7440]: I0312 14:34:59.391779 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207"
Mar 12 14:34:59.391824 master-0 kubenswrapper[7440]: I0312 14:34:59.391800 7440 scope.go:117] "RemoveContainer" containerID="e097af6a2f7f4544f59f148b96a484480bcbaf385b5a6369c813a0b13f8c8b91"
Mar 12 14:34:59.392027 master-0 kubenswrapper[7440]: I0312 14:34:59.391992 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:34:59.400388 master-0 kubenswrapper[7440]: I0312 14:34:59.400249 7440 generic.go:334] "Generic (PLEG): container finished" podID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerID="6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd" exitCode=0
Mar 12 14:34:59.400524 master-0 kubenswrapper[7440]: I0312 14:34:59.400387 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerDied","Data":"6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd"}
Mar 12 14:34:59.409156 master-0 kubenswrapper[7440]: I0312 14:34:59.402879 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7fed292c3d5a90a99bfee43e89190405" podUID="965d6e0e3f611771f8ba2352415f565a"
Mar 12 14:34:59.435475 master-0 kubenswrapper[7440]: I0312 14:34:59.434466 7440 scope.go:117] "RemoveContainer" containerID="897e913e5a5888d39eecca73ba6606dae5753683c29db8129ecaf95abc7f3cbb"
Mar 12 14:34:59.435753 master-0 kubenswrapper[7440]: I0312 14:34:59.435704 7440 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7fed292c3d5a90a99bfee43e89190405" podUID="965d6e0e3f611771f8ba2352415f565a"
Mar 12 14:34:59.816106 master-0 kubenswrapper[7440]: I0312 14:34:59.816034 7440 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fed292c3d5a90a99bfee43e89190405" path="/var/lib/kubelet/pods/7fed292c3d5a90a99bfee43e89190405/volumes"
Mar 12 14:35:00.131962 master-0 kubenswrapper[7440]: I0312 14:35:00.131761 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:35:00.131962 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:35:00.131962 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:35:00.131962 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:35:00.131962 master-0 kubenswrapper[7440]: I0312 14:35:00.131871 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:35:00.411777 master-0 kubenswrapper[7440]: I0312 14:35:00.411551 7440 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7fed292c3d5a90a99bfee43e89190405/kube-controller-manager-cert-syncer/1.log"
Mar 12 14:35:00.692268 master-0 kubenswrapper[7440]: I0312 14:35:00.692224 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 14:35:00.777489 master-0 kubenswrapper[7440]: I0312 14:35:00.777394 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock\") pod \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") "
Mar 12 14:35:00.777489 master-0 kubenswrapper[7440]: I0312 14:35:00.777493 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir\") pod \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") "
Mar 12 14:35:00.777814 master-0 kubenswrapper[7440]: I0312 14:35:00.777547 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock" (OuterVolumeSpecName: "var-lock") pod "9a2b4b06-98cd-4ca3-aebe-d49651c6013f" (UID: "9a2b4b06-98cd-4ca3-aebe-d49651c6013f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:35:00.777814 master-0 kubenswrapper[7440]: I0312 14:35:00.777580 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access\") pod \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\" (UID: \"9a2b4b06-98cd-4ca3-aebe-d49651c6013f\") "
Mar 12 14:35:00.777814 master-0 kubenswrapper[7440]: I0312 14:35:00.777587 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a2b4b06-98cd-4ca3-aebe-d49651c6013f" (UID: "9a2b4b06-98cd-4ca3-aebe-d49651c6013f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:35:00.778068 master-0 kubenswrapper[7440]: I0312 14:35:00.777890 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:00.778068 master-0 kubenswrapper[7440]: I0312 14:35:00.777967 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:00.780594 master-0 kubenswrapper[7440]: I0312 14:35:00.780553 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a2b4b06-98cd-4ca3-aebe-d49651c6013f" (UID: "9a2b4b06-98cd-4ca3-aebe-d49651c6013f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:35:00.879301 master-0 kubenswrapper[7440]: I0312 14:35:00.879238 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a2b4b06-98cd-4ca3-aebe-d49651c6013f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:01.131205 master-0 kubenswrapper[7440]: I0312 14:35:01.131116 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:35:01.131205 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:35:01.131205 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:35:01.131205 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:35:01.131468 master-0 kubenswrapper[7440]: I0312 14:35:01.131252 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:35:01.421387 master-0 kubenswrapper[7440]: I0312 14:35:01.421278 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerDied","Data":"d550ea8dc31005b416b9c69f57e3f529e1fb9f7cb9468cf14d70b47c6fe1bf41"}
Mar 12 14:35:01.421387 master-0 kubenswrapper[7440]: I0312 14:35:01.421319 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d550ea8dc31005b416b9c69f57e3f529e1fb9f7cb9468cf14d70b47c6fe1bf41"
Mar 12 14:35:01.421588 master-0 kubenswrapper[7440]: I0312 14:35:01.421391 7440 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:35:02.132181 master-0 kubenswrapper[7440]: I0312 14:35:02.132076 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:02.132181 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:02.132181 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:02.132181 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:02.133115 master-0 kubenswrapper[7440]: I0312 14:35:02.132193 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:03.131008 master-0 kubenswrapper[7440]: I0312 14:35:03.130950 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:03.131008 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:03.131008 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:03.131008 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:03.131303 master-0 kubenswrapper[7440]: I0312 14:35:03.131016 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:04.132323 master-0 kubenswrapper[7440]: I0312 14:35:04.132253 7440 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:04.132323 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:04.132323 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:04.132323 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:04.132889 master-0 kubenswrapper[7440]: I0312 14:35:04.132324 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:05.132819 master-0 kubenswrapper[7440]: I0312 14:35:05.132705 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:05.132819 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:05.132819 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:05.132819 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:05.132819 master-0 kubenswrapper[7440]: I0312 14:35:05.132807 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:06.131664 master-0 kubenswrapper[7440]: I0312 14:35:06.131568 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
14:35:06.131664 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:06.131664 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:06.131664 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:06.131664 master-0 kubenswrapper[7440]: I0312 14:35:06.131646 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:06.805324 master-0 kubenswrapper[7440]: I0312 14:35:06.805275 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:35:06.805795 master-0 kubenswrapper[7440]: E0312 14:35:06.805511 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:35:07.131813 master-0 kubenswrapper[7440]: I0312 14:35:07.131661 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:07.131813 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:07.131813 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:07.131813 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:07.131813 master-0 kubenswrapper[7440]: I0312 14:35:07.131734 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:08.132213 master-0 kubenswrapper[7440]: I0312 14:35:08.132112 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:08.132213 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:08.132213 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:08.132213 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:08.132213 master-0 kubenswrapper[7440]: I0312 14:35:08.132181 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:09.131555 master-0 kubenswrapper[7440]: I0312 14:35:09.131450 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:09.131555 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:09.131555 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:09.131555 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:09.132068 master-0 kubenswrapper[7440]: I0312 14:35:09.131544 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:10.131382 master-0 kubenswrapper[7440]: I0312 14:35:10.131302 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:10.131382 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:10.131382 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:10.131382 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:10.132014 master-0 kubenswrapper[7440]: I0312 14:35:10.131386 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:11.132390 master-0 kubenswrapper[7440]: I0312 14:35:11.132254 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:11.132390 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:11.132390 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:11.132390 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:11.132982 master-0 kubenswrapper[7440]: I0312 14:35:11.132368 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:12.132321 master-0 kubenswrapper[7440]: I0312 14:35:12.132225 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:12.132321 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:12.132321 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:12.132321 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:12.133402 master-0 kubenswrapper[7440]: I0312 14:35:12.132318 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:12.804690 master-0 kubenswrapper[7440]: I0312 14:35:12.804611 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:12.831197 master-0 kubenswrapper[7440]: I0312 14:35:12.831163 7440 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9c8e4d3c-cd75-4221-86ae-9e8919dbb044" Mar 12 14:35:12.831197 master-0 kubenswrapper[7440]: I0312 14:35:12.831198 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9c8e4d3c-cd75-4221-86ae-9e8919dbb044" Mar 12 14:35:12.850715 master-0 kubenswrapper[7440]: I0312 14:35:12.849850 7440 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:12.856605 master-0 kubenswrapper[7440]: I0312 14:35:12.856543 7440 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:35:12.864189 master-0 kubenswrapper[7440]: I0312 14:35:12.863734 7440 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:12.865300 master-0 kubenswrapper[7440]: I0312 14:35:12.865271 7440 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:35:12.874264 master-0 kubenswrapper[7440]: I0312 14:35:12.874211 7440 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:35:13.131503 master-0 kubenswrapper[7440]: I0312 14:35:13.131456 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:13.131503 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:13.131503 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:13.131503 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:13.131778 master-0 kubenswrapper[7440]: I0312 14:35:13.131522 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:13.563418 master-0 kubenswrapper[7440]: I0312 14:35:13.563363 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} Mar 12 14:35:13.563418 master-0 kubenswrapper[7440]: I0312 14:35:13.563411 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} Mar 12 14:35:13.563418 master-0 kubenswrapper[7440]: I0312 14:35:13.563425 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"4962f86c890ab9be604d23a0da920ebdb05a4b0dbc30671f52da23640f2df151"} Mar 12 14:35:14.131251 master-0 kubenswrapper[7440]: I0312 14:35:14.131079 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:14.131251 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:14.131251 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:14.131251 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:14.131251 master-0 kubenswrapper[7440]: I0312 14:35:14.131199 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:14.572649 master-0 kubenswrapper[7440]: I0312 14:35:14.571644 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} Mar 12 14:35:14.572649 master-0 kubenswrapper[7440]: I0312 14:35:14.571689 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} Mar 12 14:35:15.131447 master-0 kubenswrapper[7440]: I0312 14:35:15.131381 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:15.131447 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:15.131447 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:15.131447 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:15.131741 master-0 kubenswrapper[7440]: I0312 14:35:15.131458 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:16.131428 master-0 kubenswrapper[7440]: I0312 14:35:16.131330 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:16.131428 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:16.131428 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:16.131428 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:16.132521 master-0 kubenswrapper[7440]: I0312 14:35:16.131462 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:17.131622 master-0 kubenswrapper[7440]: I0312 
14:35:17.131553 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:17.131622 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:17.131622 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:17.131622 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:17.132311 master-0 kubenswrapper[7440]: I0312 14:35:17.131635 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:18.131148 master-0 kubenswrapper[7440]: I0312 14:35:18.131088 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:18.131148 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:18.131148 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:18.131148 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:18.131437 master-0 kubenswrapper[7440]: I0312 14:35:18.131169 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:18.805415 master-0 kubenswrapper[7440]: I0312 14:35:18.805380 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:35:18.806218 master-0 kubenswrapper[7440]: E0312 14:35:18.806183 7440 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" Mar 12 14:35:19.132172 master-0 kubenswrapper[7440]: I0312 14:35:19.131993 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:19.132172 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:19.132172 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:19.132172 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:19.132172 master-0 kubenswrapper[7440]: I0312 14:35:19.132057 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:20.133324 master-0 kubenswrapper[7440]: I0312 14:35:20.133057 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:20.133324 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:20.133324 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:20.133324 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:20.133324 master-0 kubenswrapper[7440]: I0312 14:35:20.133143 7440 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:21.131520 master-0 kubenswrapper[7440]: I0312 14:35:21.131371 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:21.131520 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:21.131520 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:21.131520 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:21.131520 master-0 kubenswrapper[7440]: I0312 14:35:21.131444 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:22.131273 master-0 kubenswrapper[7440]: I0312 14:35:22.131188 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 14:35:22.131273 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld Mar 12 14:35:22.131273 master-0 kubenswrapper[7440]: [+]process-running ok Mar 12 14:35:22.131273 master-0 kubenswrapper[7440]: healthz check failed Mar 12 14:35:22.132013 master-0 kubenswrapper[7440]: I0312 14:35:22.131292 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 14:35:22.865760 
master-0 kubenswrapper[7440]: I0312 14:35:22.865644 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.865760 master-0 kubenswrapper[7440]: I0312 14:35:22.865724 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.865760 master-0 kubenswrapper[7440]: I0312 14:35:22.865740 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.865760 master-0 kubenswrapper[7440]: I0312 14:35:22.865753 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.871722 master-0 kubenswrapper[7440]: I0312 14:35:22.870653 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.871722 master-0 kubenswrapper[7440]: I0312 14:35:22.870735 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:35:22.890748 master-0 kubenswrapper[7440]: I0312 14:35:22.890676 7440 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.890655244 podStartE2EDuration="10.890655244s" podCreationTimestamp="2026-03-12 14:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:35:14.598189169 +0000 UTC m=+1374.933567728" watchObservedRunningTime="2026-03-12 14:35:22.890655244 +0000 UTC m=+1383.226033803" Mar 12 14:35:23.131122 master-0 kubenswrapper[7440]: I0312 14:35:23.131011 7440 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:35:23.131122 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:35:23.131122 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:35:23.131122 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:35:23.131122 master-0 kubenswrapper[7440]: I0312 14:35:23.131073 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:35:23.644336 master-0 kubenswrapper[7440]: I0312 14:35:23.644284 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:35:23.645065 master-0 kubenswrapper[7440]: I0312 14:35:23.644998 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:35:24.132306 master-0 kubenswrapper[7440]: I0312 14:35:24.132233 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:35:24.132306 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:35:24.132306 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:35:24.132306 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:35:24.132306 master-0 kubenswrapper[7440]: I0312 14:35:24.132305 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:35:25.130770 master-0 kubenswrapper[7440]: I0312 14:35:25.130715 7440 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-gjwhp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 14:35:25.130770 master-0 kubenswrapper[7440]: [-]has-synced failed: reason withheld
Mar 12 14:35:25.130770 master-0 kubenswrapper[7440]: [+]process-running ok
Mar 12 14:35:25.130770 master-0 kubenswrapper[7440]: healthz check failed
Mar 12 14:35:25.130770 master-0 kubenswrapper[7440]: I0312 14:35:25.130771 7440 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 14:35:25.131189 master-0 kubenswrapper[7440]: I0312 14:35:25.130814 7440 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:35:25.131277 master-0 kubenswrapper[7440]: I0312 14:35:25.131240 7440 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7"} pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 14:35:25.131277 master-0 kubenswrapper[7440]: I0312 14:35:25.131273 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" podUID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerName="router" containerID="cri-o://8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" gracePeriod=3600
Mar 12 14:35:29.807922 master-0 kubenswrapper[7440]: I0312 14:35:29.807775 7440 scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"
Mar 12 14:35:29.808573 master-0 kubenswrapper[7440]: E0312 14:35:29.808104 7440 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-44hhf_openshift-ingress-operator(4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" podUID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6"
Mar 12 14:35:29.858795 master-0 kubenswrapper[7440]: I0312 14:35:29.858749 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:35:32.956236 master-0 kubenswrapper[7440]: I0312 14:35:32.956146 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 14:35:32.956885 master-0 kubenswrapper[7440]: E0312 14:35:32.956519 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer"
Mar 12 14:35:32.956885 master-0 kubenswrapper[7440]: I0312 14:35:32.956536 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer"
Mar 12 14:35:32.956885 master-0 kubenswrapper[7440]: I0312 14:35:32.956689 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer"
Mar 12 14:35:32.957182 master-0 kubenswrapper[7440]: I0312 14:35:32.957151 7440 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 12 14:35:32.957374 master-0 kubenswrapper[7440]: I0312 14:35:32.957323 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:32.957431 master-0 kubenswrapper[7440]: I0312 14:35:32.957376 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4" gracePeriod=15
Mar 12 14:35:32.957541 master-0 kubenswrapper[7440]: I0312 14:35:32.957510 7440 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d" gracePeriod=15
Mar 12 14:35:32.958482 master-0 kubenswrapper[7440]: I0312 14:35:32.958362 7440 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 14:35:32.958697 master-0 kubenswrapper[7440]: E0312 14:35:32.958664 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 14:35:32.958697 master-0 kubenswrapper[7440]: I0312 14:35:32.958690 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 14:35:32.958771 master-0 kubenswrapper[7440]: E0312 14:35:32.958716 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 14:35:32.958771 master-0 kubenswrapper[7440]: I0312 14:35:32.958724 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 14:35:32.958771 master-0 kubenswrapper[7440]: E0312 14:35:32.958749 7440 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 14:35:32.958771 master-0 kubenswrapper[7440]: I0312 14:35:32.958766 7440 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 14:35:32.958933 master-0 kubenswrapper[7440]: I0312 14:35:32.958915 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 12 14:35:32.958972 master-0 kubenswrapper[7440]: I0312 14:35:32.958951 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 12 14:35:32.959011 master-0 kubenswrapper[7440]: I0312 14:35:32.958971 7440 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 12 14:35:32.960539 master-0 kubenswrapper[7440]: I0312 14:35:32.960507 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.003479 master-0 kubenswrapper[7440]: E0312 14:35:33.003429 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.021413 master-0 kubenswrapper[7440]: E0312 14:35:33.021367 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.036283 master-0 kubenswrapper[7440]: I0312 14:35:33.036230 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.036385 master-0 kubenswrapper[7440]: I0312 14:35:33.036296 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.036385 master-0 kubenswrapper[7440]: I0312 14:35:33.036357 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.036449 master-0 kubenswrapper[7440]: I0312 14:35:33.036382 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.036449 master-0 kubenswrapper[7440]: I0312 14:35:33.036435 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.036512 master-0 kubenswrapper[7440]: I0312 14:35:33.036458 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.036512 master-0 kubenswrapper[7440]: I0312 14:35:33.036494 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.036758 master-0 kubenswrapper[7440]: I0312 14:35:33.036713 7440 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.137930 master-0 kubenswrapper[7440]: I0312 14:35:33.137837 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138162 master-0 kubenswrapper[7440]: I0312 14:35:33.137975 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138162 master-0 kubenswrapper[7440]: I0312 14:35:33.138011 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138162 master-0 kubenswrapper[7440]: I0312 14:35:33.138037 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138162 master-0 kubenswrapper[7440]: I0312 14:35:33.138096 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138162 master-0 kubenswrapper[7440]: I0312 14:35:33.138134 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138175 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138212 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138187 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138256 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138261 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138267 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138294 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138376 master-0 kubenswrapper[7440]: I0312 14:35:33.138311 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.138688 master-0 kubenswrapper[7440]: I0312 14:35:33.138387 7440 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.138688 master-0 kubenswrapper[7440]: I0312 14:35:33.138482 7440 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.304472 master-0 kubenswrapper[7440]: I0312 14:35:33.304394 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.323131 master-0 kubenswrapper[7440]: I0312 14:35:33.323064 7440 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.328413 master-0 kubenswrapper[7440]: W0312 14:35:33.328358 7440 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a18cac8a90d6913a6a0391d805cddc9.slice/crio-5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f WatchSource:0}: Error finding container 5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f: Status 404 returned error can't find the container with id 5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f
Mar 12 14:35:33.342431 master-0 kubenswrapper[7440]: E0312 14:35:33.342262 7440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c1eb908bb1596 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:35:33.341394326 +0000 UTC m=+1393.676772885,LastTimestamp:2026-03-12 14:35:33.341394326 +0000 UTC m=+1393.676772885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 14:35:33.402119 master-0 kubenswrapper[7440]: I0312 14:35:33.402071 7440 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body=
Mar 12 14:35:33.402346 master-0 kubenswrapper[7440]: I0312 14:35:33.402135 7440 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:35:33.715012 master-0 kubenswrapper[7440]: I0312 14:35:33.714957 7440 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447" exitCode=0
Mar 12 14:35:33.715212 master-0 kubenswrapper[7440]: I0312 14:35:33.715033 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447"}
Mar 12 14:35:33.715212 master-0 kubenswrapper[7440]: I0312 14:35:33.715072 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"75d2cc73f5d8290489c2ec72fc148a6f125ffa59eaf8f20c0252b0060ef642a3"}
Mar 12 14:35:33.716220 master-0 kubenswrapper[7440]: E0312 14:35:33.716150 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:33.716620 master-0 kubenswrapper[7440]: I0312 14:35:33.716586 7440 generic.go:334] "Generic (PLEG): container finished" podID="5a56d42a-efb4-4956-acab-d12c7ca5276e" containerID="146c62a465e9e1e895adc796ffe1dc3a492864f1300cc5372ec58af6ed5526e2" exitCode=0
Mar 12 14:35:33.716685 master-0 kubenswrapper[7440]: I0312 14:35:33.716651 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerDied","Data":"146c62a465e9e1e895adc796ffe1dc3a492864f1300cc5372ec58af6ed5526e2"}
Mar 12 14:35:33.717764 master-0 kubenswrapper[7440]: I0312 14:35:33.717717 7440 status_manager.go:851] "Failed to get status for pod" podUID="5a56d42a-efb4-4956-acab-d12c7ca5276e" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:35:33.718774 master-0 kubenswrapper[7440]: I0312 14:35:33.718307 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"}
Mar 12 14:35:33.718774 master-0 kubenswrapper[7440]: I0312 14:35:33.718341 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f"}
Mar 12 14:35:33.719498 master-0 kubenswrapper[7440]: E0312 14:35:33.719082 7440 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:35:33.719498 master-0 kubenswrapper[7440]: I0312 14:35:33.719118 7440 status_manager.go:851] "Failed to get status for pod" podUID="5a56d42a-efb4-4956-acab-d12c7ca5276e" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:35:33.720754 master-0 kubenswrapper[7440]: I0312 14:35:33.720702 7440 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d" exitCode=0
Mar 12 14:35:34.740373 master-0 kubenswrapper[7440]: I0312 14:35:34.740291 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722"}
Mar 12 14:35:34.740373 master-0 kubenswrapper[7440]: I0312 14:35:34.740337 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9"}
Mar 12 14:35:34.740373 master-0 kubenswrapper[7440]: I0312 14:35:34.740346 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097"}
Mar 12 14:35:34.740373 master-0 kubenswrapper[7440]: I0312 14:35:34.740354 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c"}
Mar 12 14:35:35.045541 master-0 kubenswrapper[7440]: I0312 14:35:35.038707 7440 scope.go:117] "RemoveContainer" containerID="d88b47b724ff96f583f2f5d18384ac675317e999c797b06ce407d3a96a3c0fcd"
Mar 12 14:35:35.118034 master-0 kubenswrapper[7440]: I0312 14:35:35.117999 7440 scope.go:117] "RemoveContainer" containerID="0dec01a437416a94b1faca50b639752f8ecf1a0b753ff095fb2b1362f1488914"
Mar 12 14:35:35.276249 master-0 kubenswrapper[7440]: I0312 14:35:35.276199 7440 scope.go:117] "RemoveContainer" containerID="cf5f8f103a771fcea458b305dc771a6ec643f8d62a671cc46fbc879cf21a71e2"
Mar 12 14:35:35.290371 master-0 kubenswrapper[7440]: I0312 14:35:35.290116 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:35:35.430263 master-0 kubenswrapper[7440]: I0312 14:35:35.430226 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:35:35.430522 master-0 kubenswrapper[7440]: I0312 14:35:35.430506 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:35:35.430679 master-0 kubenswrapper[7440]: I0312 14:35:35.430666 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:35:35.430767 master-0 kubenswrapper[7440]: I0312 14:35:35.430348 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:35:35.430806 master-0 kubenswrapper[7440]: I0312 14:35:35.430789 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock" (OuterVolumeSpecName: "var-lock") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:35:35.431108 master-0 kubenswrapper[7440]: I0312 14:35:35.431090 7440 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:35.431217 master-0 kubenswrapper[7440]: I0312 14:35:35.431202 7440 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:35.436947 master-0 kubenswrapper[7440]: I0312 14:35:35.436480 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:35:35.532464 master-0 kubenswrapper[7440]: I0312 14:35:35.532412 7440 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:35:35.729629 master-0 kubenswrapper[7440]: I0312 14:35:35.729583 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:35:35.755357 master-0 kubenswrapper[7440]: I0312 14:35:35.755308 7440 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4" exitCode=0
Mar 12 14:35:35.755913 master-0 kubenswrapper[7440]: I0312 14:35:35.755372 7440 scope.go:117] "RemoveContainer" containerID="e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"
Mar 12 14:35:35.755913 master-0 kubenswrapper[7440]: I0312 14:35:35.755542 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 14:35:35.757004 master-0 kubenswrapper[7440]: I0312 14:35:35.756980 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerDied","Data":"de0406e113f23db73705a57d2ac92f7e04c405beeb25e91cf51ec912fcd90a38"}
Mar 12 14:35:35.757060 master-0 kubenswrapper[7440]: I0312 14:35:35.757004 7440 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:35:35.757094 master-0 kubenswrapper[7440]: I0312 14:35:35.757009 7440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0406e113f23db73705a57d2ac92f7e04c405beeb25e91cf51ec912fcd90a38"
Mar 12 14:35:35.760776 master-0 kubenswrapper[7440]: I0312 14:35:35.760738 7440 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0"}
Mar 12 14:35:35.761121 master-0 kubenswrapper[7440]: I0312 14:35:35.761046 7440 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:35:35.770195 master-0 kubenswrapper[7440]: I0312 14:35:35.770163 7440 scope.go:117] "RemoveContainer" containerID="76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"
Mar 12 14:35:35.788572 master-0 kubenswrapper[7440]: I0312 14:35:35.788527 7440 scope.go:117] "RemoveContainer" containerID="e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"
Mar 12 14:35:35.803515 master-0 kubenswrapper[7440]: I0312 14:35:35.803467 7440 scope.go:117] "RemoveContainer" containerID="e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"
Mar 12 14:35:35.804011 master-0 kubenswrapper[7440]: E0312 14:35:35.803957 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d\": container with ID starting with e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d not found: ID does not exist" containerID="e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"
Mar 12 14:35:35.804210 master-0 kubenswrapper[7440]: I0312 14:35:35.804032 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d"} err="failed to get container status \"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d\": rpc error: code = NotFound desc = could not find container \"e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d\": container with ID starting with e2b0c764e775c64bb06daa502f6ffcef2b80b99417457721ebe17108234fc61d not found: ID does not exist"
Mar 12 14:35:35.804210 master-0 kubenswrapper[7440]: I0312 14:35:35.804206 7440 scope.go:117] "RemoveContainer" containerID="76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"
Mar 12 14:35:35.804660 master-0 kubenswrapper[7440]: E0312 14:35:35.804616 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4\": container with ID starting with 76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4 not found: ID does not exist" containerID="76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"
Mar 12 14:35:35.804660 master-0 kubenswrapper[7440]: I0312 14:35:35.804651 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4"} err="failed to get container status \"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4\": rpc error: code = NotFound desc = could not find container \"76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4\": container with ID starting with 76e7b395c2a9ba3ff27523b5970961a2bb5a85db216f39e42f2dea82ac7351d4 not found: ID does not exist"
Mar 12 14:35:35.804790 master-0 kubenswrapper[7440]: I0312 14:35:35.804670 7440 scope.go:117] "RemoveContainer" containerID="e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"
Mar 12 14:35:35.804980 master-0 kubenswrapper[7440]: E0312 14:35:35.804892 7440 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a\": container with ID starting with e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a not found: ID does not exist" containerID="e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"
Mar 12 14:35:35.805131 master-0 kubenswrapper[7440]: I0312 14:35:35.804978 7440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a"} err="failed to get container status \"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a\": rpc error: code = NotFound desc = could not find container \"e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a\": container with ID starting with e520d98d7cf8903cafb8595cf7b3f03df14b8a00d253f1fd4abb1292c29d616a not found: ID does not exist"
Mar 12 14:35:35.835286 master-0 kubenswrapper[7440]: I0312 14:35:35.835233 7440 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 12 14:35:35.835642 master-0 kubenswrapper[7440]: I0312 14:35:35.835617 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 12 14:35:35.835683 master-0 kubenswrapper[7440]: I0312 14:35:35.835659 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 12 14:35:35.835683 master-0 kubenswrapper[7440]: I0312 14:35:35.835679 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 12 14:35:35.835768 master-0 kubenswrapper[7440]: I0312 14:35:35.835753 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 12 14:35:35.835802 master-0 kubenswrapper[7440]: I0312 14:35:35.835755 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:35:35.835802 master-0 kubenswrapper[7440]: I0312 14:35:35.835793 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 12 14:35:35.835874 master-0 kubenswrapper[7440]: I0312 14:35:35.835808 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:35:35.835874 master-0 kubenswrapper[7440]: I0312 14:35:35.835824 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:35:35.835874 master-0 kubenswrapper[7440]: I0312 14:35:35.835833 7440 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 14:35:35.835874 master-0 kubenswrapper[7440]: I0312 14:35:35.835849 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:35:35.835874 master-0 kubenswrapper[7440]: I0312 14:35:35.835855 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:35:35.836069 master-0 kubenswrapper[7440]: I0312 14:35:35.835982 7440 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:35:35.836284 master-0 kubenswrapper[7440]: I0312 14:35:35.836248 7440 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:35.836284 master-0 kubenswrapper[7440]: I0312 14:35:35.836268 7440 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:35.836284 master-0 kubenswrapper[7440]: I0312 14:35:35.836280 7440 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:35.836383 master-0 kubenswrapper[7440]: I0312 14:35:35.836289 7440 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:35.836383 master-0 kubenswrapper[7440]: I0312 14:35:35.836297 7440 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:35.836383 master-0 kubenswrapper[7440]: I0312 14:35:35.836306 7440 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 12 14:35:40.840980 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 14:35:40.859922 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 14:35:40.860250 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 14:35:40.861666 master-0 systemd[1]: kubelet.service: Consumed 2min 55.589s CPU time. Mar 12 14:35:40.906670 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 14:35:41.048190 master-0 kubenswrapper[37036]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 14:35:41.048882 master-0 kubenswrapper[37036]: I0312 14:35:41.048276 37036 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 14:35:41.050386 master-0 kubenswrapper[37036]: W0312 14:35:41.050359 37036 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 14:35:41.050386 master-0 kubenswrapper[37036]: W0312 14:35:41.050376 37036 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 14:35:41.050386 master-0 kubenswrapper[37036]: W0312 14:35:41.050382 37036 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 14:35:41.050386 master-0 kubenswrapper[37036]: W0312 14:35:41.050387 37036 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 14:35:41.050386 master-0 kubenswrapper[37036]: W0312 14:35:41.050392 37036 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050397 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050401 37036 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050405 37036 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050408 37036 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050412 37036 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050415 37036 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050419 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050422 37036 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050426 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050429 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050433 37036 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050436 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050440 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050444 37036 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050447 37036 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050450 37036 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050454 37036 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050464 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050468 37036 feature_gate.go:330] unrecognized feature gate: 
CSIDriverSharedResource Mar 12 14:35:41.051356 master-0 kubenswrapper[37036]: W0312 14:35:41.050472 37036 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050475 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050479 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050482 37036 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050486 37036 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050490 37036 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050494 37036 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050497 37036 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050501 37036 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050504 37036 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050508 37036 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050512 37036 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050515 37036 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 
14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050518 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050522 37036 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050525 37036 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050529 37036 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050532 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050536 37036 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050541 37036 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 14:35:41.052248 master-0 kubenswrapper[37036]: W0312 14:35:41.050545 37036 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050550 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050553 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050557 37036 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050560 37036 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050564 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050568 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050571 37036 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050575 37036 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050578 37036 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050582 37036 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050585 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050589 37036 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 
12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050593 37036 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050597 37036 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050601 37036 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050605 37036 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050609 37036 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050613 37036 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050616 37036 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 14:35:41.053132 master-0 kubenswrapper[37036]: W0312 14:35:41.050621 37036 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050746 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050751 37036 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050754 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050758 37036 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050761 37036 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050765 37036 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: W0312 14:35:41.050770 37036 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050869 37036 flags.go:64] FLAG: --address="0.0.0.0" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050878 37036 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050884 37036 flags.go:64] FLAG: --anonymous-auth="true" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050890 37036 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050909 37036 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050913 37036 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050919 37036 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050925 37036 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050930 37036 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050934 37036 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050938 37036 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050942 37036 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050947 37036 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 14:35:41.050951 37036 flags.go:64] FLAG: --cgroup-root="" Mar 12 14:35:41.053835 master-0 kubenswrapper[37036]: I0312 
14:35:41.050955 37036 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050960 37036 flags.go:64] FLAG: --client-ca-file="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050965 37036 flags.go:64] FLAG: --cloud-config="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050969 37036 flags.go:64] FLAG: --cloud-provider="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050974 37036 flags.go:64] FLAG: --cluster-dns="[]" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050980 37036 flags.go:64] FLAG: --cluster-domain="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050983 37036 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050988 37036 flags.go:64] FLAG: --config-dir="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050992 37036 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.050997 37036 flags.go:64] FLAG: --container-log-max-files="5" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051003 37036 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051007 37036 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051011 37036 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051017 37036 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051041 37036 flags.go:64] FLAG: --contention-profiling="false" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051048 37036 
flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051052 37036 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051057 37036 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051061 37036 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051066 37036 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051072 37036 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051076 37036 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051080 37036 flags.go:64] FLAG: --enable-load-reader="false" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051084 37036 flags.go:64] FLAG: --enable-server="true" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051089 37036 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 12 14:35:41.054694 master-0 kubenswrapper[37036]: I0312 14:35:41.051094 37036 flags.go:64] FLAG: --event-burst="100" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051098 37036 flags.go:64] FLAG: --event-qps="50" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051102 37036 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051106 37036 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051110 37036 flags.go:64] FLAG: --eviction-hard="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051115 37036 flags.go:64] FLAG: 
--eviction-max-pod-grace-period="0" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051119 37036 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051123 37036 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051127 37036 flags.go:64] FLAG: --eviction-soft="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051131 37036 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051136 37036 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051140 37036 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051144 37036 flags.go:64] FLAG: --experimental-mounter-path="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051147 37036 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051153 37036 flags.go:64] FLAG: --fail-swap-on="true" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051157 37036 flags.go:64] FLAG: --feature-gates="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051162 37036 flags.go:64] FLAG: --file-check-frequency="20s" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051166 37036 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051171 37036 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051179 37036 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051183 37036 flags.go:64] FLAG: 
--healthz-port="10248" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051188 37036 flags.go:64] FLAG: --help="false" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051192 37036 flags.go:64] FLAG: --hostname-override="" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051196 37036 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051200 37036 flags.go:64] FLAG: --http-check-frequency="20s" Mar 12 14:35:41.055708 master-0 kubenswrapper[37036]: I0312 14:35:41.051205 37036 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051232 37036 flags.go:64] FLAG: --image-credential-provider-config="" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051237 37036 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051241 37036 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051245 37036 flags.go:64] FLAG: --image-service-endpoint="" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051249 37036 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051253 37036 flags.go:64] FLAG: --kube-api-burst="100" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051257 37036 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051262 37036 flags.go:64] FLAG: --kube-api-qps="50" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051265 37036 flags.go:64] FLAG: --kube-reserved="" Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051270 37036 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 12 
14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051273 37036 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051278 37036 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051281 37036 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051285 37036 flags.go:64] FLAG: --lock-file=""
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051289 37036 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051293 37036 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051297 37036 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051304 37036 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051308 37036 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051312 37036 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051316 37036 flags.go:64] FLAG: --logging-format="text"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051320 37036 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051325 37036 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051329 37036 flags.go:64] FLAG: --manifest-url=""
Mar 12 14:35:41.056603 master-0 kubenswrapper[37036]: I0312 14:35:41.051333 37036 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051341 37036 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051345 37036 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051351 37036 flags.go:64] FLAG: --max-pods="110"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051355 37036 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051359 37036 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051363 37036 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051367 37036 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051371 37036 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051375 37036 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051380 37036 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051392 37036 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051396 37036 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051400 37036 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051404 37036 flags.go:64] FLAG: --pod-cidr=""
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051408 37036 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051414 37036 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051418 37036 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051423 37036 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051427 37036 flags.go:64] FLAG: --port="10250"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051431 37036 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051435 37036 flags.go:64] FLAG: --provider-id=""
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051439 37036 flags.go:64] FLAG: --qos-reserved=""
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051443 37036 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 14:35:41.057816 master-0 kubenswrapper[37036]: I0312 14:35:41.051447 37036 flags.go:64] FLAG: --register-node="true"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051451 37036 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051455 37036 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051477 37036 flags.go:64] FLAG: --registry-burst="10"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051481 37036 flags.go:64] FLAG: --registry-qps="5"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051486 37036 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051490 37036 flags.go:64] FLAG: --reserved-memory=""
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051496 37036 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051501 37036 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051505 37036 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051512 37036 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051516 37036 flags.go:64] FLAG: --runonce="false"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051520 37036 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051525 37036 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051529 37036 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051534 37036 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051538 37036 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051542 37036 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051546 37036 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051557 37036 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051562 37036 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051566 37036 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051570 37036 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051576 37036 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051580 37036 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 14:35:41.059836 master-0 kubenswrapper[37036]: I0312 14:35:41.051585 37036 flags.go:64] FLAG: --system-cgroups=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051589 37036 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051595 37036 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051600 37036 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051604 37036 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051609 37036 flags.go:64] FLAG: --tls-min-version=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051614 37036 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051618 37036 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051622 37036 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051626 37036 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051631 37036 flags.go:64] FLAG: --v="2"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051637 37036 flags.go:64] FLAG: --version="false"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051642 37036 flags.go:64] FLAG: --vmodule=""
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051648 37036 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: I0312 14:35:41.051653 37036 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051752 37036 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051757 37036 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051763 37036 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051768 37036 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051773 37036 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051777 37036 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051781 37036 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:35:41.060982 master-0 kubenswrapper[37036]: W0312 14:35:41.051785 37036 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051789 37036 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051792 37036 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051796 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051802 37036 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051805 37036 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051809 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051812 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051816 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051820 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051823 37036 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051827 37036 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051832 37036 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051836 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051839 37036 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051843 37036 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051846 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051850 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051854 37036 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051858 37036 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:35:41.061842 master-0 kubenswrapper[37036]: W0312 14:35:41.051861 37036 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051865 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051869 37036 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051872 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051876 37036 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051880 37036 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051884 37036 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051889 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051904 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051908 37036 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051912 37036 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051916 37036 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051919 37036 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051922 37036 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051926 37036 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051929 37036 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051934 37036 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051938 37036 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051942 37036 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051945 37036 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:35:41.062740 master-0 kubenswrapper[37036]: W0312 14:35:41.051948 37036 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051952 37036 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051956 37036 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051959 37036 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051963 37036 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051966 37036 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051970 37036 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051973 37036 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051977 37036 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051981 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051984 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051988 37036 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051991 37036 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051994 37036 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.051998 37036 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.052001 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.052005 37036 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.052011 37036 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.052016 37036 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:35:41.066001 master-0 kubenswrapper[37036]: W0312 14:35:41.052022 37036 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.052027 37036 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.052033 37036 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.052038 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.052042 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.052046 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: I0312 14:35:41.052052 37036 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: I0312 14:35:41.056731 37036 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: I0312 14:35:41.056772 37036 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056868 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056964 37036 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056973 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056978 37036 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056983 37036 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056988 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:35:41.066669 master-0 kubenswrapper[37036]: W0312 14:35:41.056994 37036 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057000 37036 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057005 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057013 37036 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057019 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057024 37036 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057029 37036 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057035 37036 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057043 37036 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057049 37036 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057055 37036 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057059 37036 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057065 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057070 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057075 37036 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057081 37036 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057087 37036 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057093 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057099 37036 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 14:35:41.067076 master-0 kubenswrapper[37036]: W0312 14:35:41.057104 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057109 37036 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057114 37036 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057119 37036 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057124 37036 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057128 37036 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057134 37036 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057139 37036 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057145 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057150 37036 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057199 37036 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057205 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057211 37036 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057216 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057222 37036 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057228 37036 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057235 37036 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057240 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057246 37036 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057251 37036 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:35:41.067689 master-0 kubenswrapper[37036]: W0312 14:35:41.057257 37036 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057262 37036 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057267 37036 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057272 37036 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057278 37036 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057284 37036 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057291 37036 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057296 37036 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057301 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057307 37036 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057312 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057317 37036 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057322 37036 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057327 37036 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057331 37036 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057336 37036 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057341 37036 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057346 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057351 37036 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057356 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 14:35:41.068670 master-0 kubenswrapper[37036]: W0312 14:35:41.057361 37036 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057366 37036 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057371 37036 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057376 37036 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057381 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057386 37036 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057390 37036 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: I0312 14:35:41.057400 37036 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057564 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057576 37036 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057582 37036 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057590 37036 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057596 37036 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057601 37036 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057606 37036 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 14:35:41.069556 master-0 kubenswrapper[37036]: W0312 14:35:41.057611 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057617 37036 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057622 37036 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057628 37036 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057632 37036 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057637 37036 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057642 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057646 37036 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057653 37036 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057658 37036 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057663 37036 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057669 37036 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057677 37036 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057683 37036 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057689 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057694 37036 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057698 37036 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057704 37036 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057709 37036 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 14:35:41.070759 master-0 kubenswrapper[37036]: W0312 14:35:41.057714 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057721 37036 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057727 37036 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057733 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057739 37036 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057744 37036 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057749 37036 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057754 37036 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057758 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057763 37036 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057768 37036 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057773 37036 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057777 37036 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057782 37036 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057787 37036 feature_gate.go:330] unrecognized feature gate: Example Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057792 
37036 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057796 37036 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057801 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057808 37036 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057813 37036 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 14:35:41.071518 master-0 kubenswrapper[37036]: W0312 14:35:41.057819 37036 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057825 37036 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057830 37036 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057836 37036 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057841 37036 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057846 37036 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057851 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057856 37036 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057861 37036 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057865 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057870 37036 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057877 37036 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057883 37036 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057889 37036 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057931 37036 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057938 37036 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057945 37036 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057950 37036 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057955 37036 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 14:35:41.072296 master-0 kubenswrapper[37036]: W0312 14:35:41.057960 37036 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: W0312 14:35:41.057965 37036 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: 
W0312 14:35:41.057970 37036 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: W0312 14:35:41.057975 37036 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: W0312 14:35:41.057980 37036 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: W0312 14:35:41.057985 37036 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: W0312 14:35:41.057990 37036 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.057999 37036 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.058194 37036 server.go:940] "Client rotation is on, will bootstrap in background" Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.062096 37036 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.062356 37036 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.062726 37036 server.go:997] "Starting client certificate rotation" Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.062755 37036 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 12 14:35:41.073246 master-0 kubenswrapper[37036]: I0312 14:35:41.063530 37036 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:35:41.073730 master-0 kubenswrapper[37036]: I0312 14:35:41.064119 37036 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 09:11:04.955285653 +0000 UTC Mar 12 14:35:41.073730 master-0 kubenswrapper[37036]: I0312 14:35:41.064151 37036 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h35m23.891136092s for next certificate rotation Mar 12 14:35:41.073730 master-0 kubenswrapper[37036]: I0312 14:35:41.068243 37036 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 14:35:41.073730 master-0 kubenswrapper[37036]: I0312 14:35:41.071597 37036 log.go:25] "Validated CRI v1 runtime API" Mar 12 14:35:41.076203 master-0 kubenswrapper[37036]: I0312 14:35:41.076151 37036 log.go:25] "Validated CRI v1 image API" Mar 12 14:35:41.078067 master-0 kubenswrapper[37036]: I0312 14:35:41.078027 37036 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 14:35:41.088100 master-0 kubenswrapper[37036]: I0312 14:35:41.087974 37036 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 847b9f13-6083-4550-852f-e0336cfa76ca:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 12 14:35:41.088789 master-0 kubenswrapper[37036]: I0312 14:35:41.088016 37036 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb/userdata/shm major:0 minor:435 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701/userdata/shm major:0 minor:429 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1086c8d5071e504e73694312636385db33200a4d801de67bcefe278f7df988d9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1086c8d5071e504e73694312636385db33200a4d801de67bcefe278f7df988d9/userdata/shm major:0 minor:769 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1325db6b5fc63da3d3f80a9e903b690f2007b20dd9156b1536d772080219b0fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1325db6b5fc63da3d3f80a9e903b690f2007b20dd9156b1536d772080219b0fc/userdata/shm major:0 minor:843 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b/userdata/shm major:0 minor:1090 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50/userdata/shm major:0 minor:720 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16c9911f528d88ff6368917af5d3a0bfb97b0cd22d43dad86b75920f982a3c90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16c9911f528d88ff6368917af5d3a0bfb97b0cd22d43dad86b75920f982a3c90/userdata/shm major:0 minor:777 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86/userdata/shm major:0 minor:724 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm major:0 minor:239 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/210d19917e7415e5f1763dbc60d79ff661ed77ac9ff9582758b201449af2e08f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/210d19917e7415e5f1763dbc60d79ff661ed77ac9ff9582758b201449af2e08f/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2376cfb1ee60c237c8964f78aeee837ea12e09f11b9b3dfc1320568c3b4a4743/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2376cfb1ee60c237c8964f78aeee837ea12e09f11b9b3dfc1320568c3b4a4743/userdata/shm major:0 minor:770 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f/userdata/shm major:0 minor:705 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4/userdata/shm major:0 minor:532 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e/userdata/shm major:0 minor:419 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39547af9c96ab9ffa0c68d5520b2aefe82b1e2e9c5c31895677204de893a9b6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39547af9c96ab9ffa0c68d5520b2aefe82b1e2e9c5c31895677204de893a9b6a/userdata/shm major:0 minor:918 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257/userdata/shm major:0 minor:820 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f/userdata/shm major:0 minor:725 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/47bb0848ead40d3cf654dbab8841bba9aaf69454627f9510e73ce08c4830d731/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/47bb0848ead40d3cf654dbab8841bba9aaf69454627f9510e73ce08c4830d731/userdata/shm major:0 minor:370 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48b23f5b2fb0b4600ed151be719911ca6e8598a87db7cece2fed00b00050b177/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48b23f5b2fb0b4600ed151be719911ca6e8598a87db7cece2fed00b00050b177/userdata/shm major:0 minor:549 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4962f86c890ab9be604d23a0da920ebdb05a4b0dbc30671f52da23640f2df151/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4962f86c890ab9be604d23a0da920ebdb05a4b0dbc30671f52da23640f2df151/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5679426d37d3354caeeb4580675058670c5c7ef6fa2efa546a861e1c9f923e06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5679426d37d3354caeeb4580675058670c5c7ef6fa2efa546a861e1c9f923e06/userdata/shm major:0 minor:630 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/57327dd3cf51a7946c6428acbb4cffd5439484941e4f876980813ac47338ecdb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/57327dd3cf51a7946c6428acbb4cffd5439484941e4f876980813ac47338ecdb/userdata/shm major:0 minor:578 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f/userdata/shm major:0 minor:75 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5a8c18378832b96fedb1cc482f9c56eff1b7bedfc155a7a794d6f4818bd05ce5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5a8c18378832b96fedb1cc482f9c56eff1b7bedfc155a7a794d6f4818bd05ce5/userdata/shm major:0 minor:1215 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6248f60ded635728b07f9ffbb9d72d48359f97cdb83b7f5d2e6153af60f77309/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6248f60ded635728b07f9ffbb9d72d48359f97cdb83b7f5d2e6153af60f77309/userdata/shm major:0 minor:1102 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2/userdata/shm major:0 minor:344 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534/userdata/shm major:0 minor:1055 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75d2cc73f5d8290489c2ec72fc148a6f125ffa59eaf8f20c0252b0060ef642a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75d2cc73f5d8290489c2ec72fc148a6f125ffa59eaf8f20c0252b0060ef642a3/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8/userdata/shm major:0 minor:697 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f4e5afa4afe018a7c389e007a13d614d179ad2102c4e104bffdef509a1d7c7b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f4e5afa4afe018a7c389e007a13d614d179ad2102c4e104bffdef509a1d7c7b/userdata/shm major:0 minor:764 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81cd0864a54b3fb544c03e1c4cc3bb2a1e8301732b585b1ac0d2dad7435e59f9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81cd0864a54b3fb544c03e1c4cc3bb2a1e8301732b585b1ac0d2dad7435e59f9/userdata/shm major:0 minor:506 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8bae2bf48688fed38a08346cb01a13f07f5d6ebf571f08738d916c6d12d3bb19/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bae2bf48688fed38a08346cb01a13f07f5d6ebf571f08738d916c6d12d3bb19/userdata/shm major:0 minor:778 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9ae8ffe0fbe6457550dbcfde92cc569b256c78e408c6b4f88c41a2524eefcfab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ae8ffe0fbe6457550dbcfde92cc569b256c78e408c6b4f88c41a2524eefcfab/userdata/shm major:0 minor:304 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a071b87c5a3a1d570849d8f30a4ef18e47cf5ac7ae26cb6fa07ebd774622be6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a071b87c5a3a1d570849d8f30a4ef18e47cf5ac7ae26cb6fa07ebd774622be6c/userdata/shm major:0 minor:469 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a41bc83813b39c2fa459a0e9284786027dca250eb150090c47a705729e7d08f5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a41bc83813b39c2fa459a0e9284786027dca250eb150090c47a705729e7d08f5/userdata/shm major:0 minor:587 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a6ab4911ef54a5ef7fd92d9752905d7377429179c56c4e77bafea0e6505d40e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6ab4911ef54a5ef7fd92d9752905d7377429179c56c4e77bafea0e6505d40e2/userdata/shm major:0 minor:436 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a917672632ddd41ece955a9caf8b6f8e502d8c6d1a179cc7a84283068844b577/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a917672632ddd41ece955a9caf8b6f8e502d8c6d1a179cc7a84283068844b577/userdata/shm major:0 minor:536 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aca8c7cb3cefb96ea167603c4fdab132577bdaf6be51eb609e79f8b9ea4df1b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aca8c7cb3cefb96ea167603c4fdab132577bdaf6be51eb609e79f8b9ea4df1b7/userdata/shm major:0 minor:325 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67/userdata/shm major:0 minor:1083 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b067750f065ba84cd14fac759b144c851d17dfcf9ba98a9096e90f8e2906332d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b067750f065ba84cd14fac759b144c851d17dfcf9ba98a9096e90f8e2906332d/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b0d9b5d35890bf7ee8f33755b50b3d62e47a389cd7d7e50fa4af660965af6cae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b0d9b5d35890bf7ee8f33755b50b3d62e47a389cd7d7e50fa4af660965af6cae/userdata/shm major:0 minor:318 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199/userdata/shm major:0 minor:992 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4/userdata/shm major:0 minor:1191 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b91ed73a339c21ab18d17bc789c0ba3301a928d38dce2afb46b197b75f34b51e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b91ed73a339c21ab18d17bc789c0ba3301a928d38dce2afb46b197b75f34b51e/userdata/shm major:0 minor:712 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568/userdata/shm major:0 minor:600 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9/userdata/shm major:0 minor:315 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891/userdata/shm major:0 minor:91 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4/userdata/shm major:0 minor:427 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4/userdata/shm major:0 minor:598 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda/userdata/shm major:0 minor:595 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e/userdata/shm major:0 minor:585 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ddff8978b61211cf6981c8dcb5ac20ebbd703343ccf0d4864c6b4d8c7b748d88/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ddff8978b61211cf6981c8dcb5ac20ebbd703343ccf0d4864c6b4d8c7b748d88/userdata/shm major:0 minor:776 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac/userdata/shm major:0 minor:1049 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0/userdata/shm major:0 minor:722 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f0298c9e8c7173c3949586fa731c073a558897f0792064c146633191e5244fab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f0298c9e8c7173c3949586fa731c073a558897f0792064c146633191e5244fab/userdata/shm major:0 minor:985 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1/userdata/shm major:0 minor:1051 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~projected/kube-api-access-276qm:{mountpoint:/var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~projected/kube-api-access-276qm major:0 minor:809 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t:{mountpoint:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~secret/srv-cert major:0 minor:592 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl:{mountpoint:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~projected/kube-api-access-flj9j:{mountpoint:/var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~projected/kube-api-access-flj9j major:0 minor:980 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:972 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d:{mountpoint:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d major:0 minor:236 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:594 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/ca-certs major:0 minor:409 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/kube-api-access-w7nnk:{mountpoint:/var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/kube-api-access-w7nnk major:0 minor:499 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc:{mountpoint:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~secret/srv-cert major:0 minor:588 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f59d485-9f69-4f36-836e-6338f84b7d69/volumes/kubernetes.io~projected/kube-api-access-fbwl8:{mountpoint:/var/lib/kubelet/pods/2f59d485-9f69-4f36-836e-6338f84b7d69/volumes/kubernetes.io~projected/kube-api-access-fbwl8 major:0 minor:717 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3815db41-fe01-43f6-b75c-4ccca9124f51/volumes/kubernetes.io~projected/kube-api-access-shknb:{mountpoint:/var/lib/kubelet/pods/3815db41-fe01-43f6-b75c-4ccca9124f51/volumes/kubernetes.io~projected/kube-api-access-shknb major:0 minor:526 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/ca-certs major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/kube-api-access-ktncx:{mountpoint:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/kube-api-access-ktncx major:0 minor:548 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:544 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~projected/kube-api-access-4krm9:{mountpoint:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~projected/kube-api-access-4krm9 major:0 minor:628 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/encryption-config major:0 minor:626 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/etcd-client major:0 minor:627 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/serving-cert major:0 minor:625 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd:{mountpoint:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~projected/kube-api-access-tkgft:{mountpoint:/var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~projected/kube-api-access-tkgft major:0 minor:512 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~secret/metrics-tls major:0 minor:527 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~projected/kube-api-access-smwtd:{mountpoint:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~projected/kube-api-access-smwtd major:0 minor:664 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cert major:0 minor:765 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:766 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg:{mountpoint:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~projected/kube-api-access-l9gvf:{mountpoint:/var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~projected/kube-api-access-l9gvf major:0 minor:305 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:301 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv:{mountpoint:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:591 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~projected/kube-api-access-ms688:{mountpoint:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~projected/kube-api-access-ms688 major:0 minor:1101 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1096 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1100 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5:{mountpoint:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5 major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:423 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4/volumes/kubernetes.io~projected/kube-api-access-fdzwp:{mountpoint:/var/lib/kubelet/pods/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4/volumes/kubernetes.io~projected/kube-api-access-fdzwp major:0 minor:340 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v:{mountpoint:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~projected/kube-api-access-qxdqn:{mountpoint:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~projected/kube-api-access-qxdqn major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:365 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:408 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~projected/kube-api-access-zv69s:{mountpoint:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~projected/kube-api-access-zv69s major:0 minor:403 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~projected/kube-api-access-zqx42:{mountpoint:/var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~projected/kube-api-access-zqx42 major:0 minor:984 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~secret/proxy-tls major:0 minor:979 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~projected/kube-api-access-9czc5:{mountpoint:/var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~projected/kube-api-access-9czc5 major:0 minor:377 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~secret/signing-key major:0 minor:376 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~projected/kube-api-access-jcz8p:{mountpoint:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~projected/kube-api-access-jcz8p major:0 minor:1089 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/certs major:0 minor:1080 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1081 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~projected/kube-api-access-m97zx:{mountpoint:/var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~projected/kube-api-access-m97zx major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~secret/proxy-tls major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g:{mountpoint:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~projected/kube-api-access-7dkwb:{mountpoint:/var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~projected/kube-api-access-7dkwb major:0 minor:307 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:306 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70710a0b-8b5d-40f5-b726-fd5e2836ffbe/volumes/kubernetes.io~projected/kube-api-access-b9cfq:{mountpoint:/var/lib/kubelet/pods/70710a0b-8b5d-40f5-b726-fd5e2836ffbe/volumes/kubernetes.io~projected/kube-api-access-b9cfq major:0 minor:718 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~projected/kube-api-access-jh2zk:{mountpoint:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~projected/kube-api-access-jh2zk major:0 minor:886 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/encryption-config major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/etcd-client major:0 minor:885 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/serving-cert major:0 minor:364 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w:{mountpoint:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp:{mountpoint:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv:{mountpoint:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~secret/metrics-certs major:0 minor:590 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6:{mountpoint:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6 major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts:{mountpoint:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:593 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j:{mountpoint:/var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd:{mountpoint:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:410 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl:{mountpoint:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~secret/metrics-tls major:0 minor:411 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~projected/kube-api-access-27tm9:{mountpoint:/var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~projected/kube-api-access-27tm9 major:0 minor:319 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~secret/webhook-certs major:0 minor:316 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~projected/kube-api-access-sjsjh:{mountpoint:/var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~projected/kube-api-access-sjsjh major:0 minor:421 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~secret/proxy-tls major:0 minor:420 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e733069-752a-4140-83eb-8287f1bce1a7/volumes/kubernetes.io~projected/kube-api-access-qvngn:{mountpoint:/var/lib/kubelet/pods/8e733069-752a-4140-83eb-8287f1bce1a7/volumes/kubernetes.io~projected/kube-api-access-qvngn major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900b2a0e-1e2b-41a3-86f5-639ec1e95969/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/900b2a0e-1e2b-41a3-86f5-639ec1e95969/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns:{mountpoint:/var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm:{mountpoint:/var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~projected/kube-api-access-m2cq8:{mountpoint:/var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~projected/kube-api-access-m2cq8 major:0 minor:670 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~secret/cert major:0 minor:669 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~projected/kube-api-access-2dlf2:{mountpoint:/var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~projected/kube-api-access-2dlf2 major:0 minor:874 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~secret/serving-cert major:0 minor:864 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv:{mountpoint:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:412 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:699 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~secret/serving-cert major:0 minor:704 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~projected/kube-api-access-b7krt:{mountpoint:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~projected/kube-api-access-b7krt major:0 minor:711 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:638 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:522 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~projected/kube-api-access-l6d7w:{mountpoint:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~projected/kube-api-access-l6d7w major:0 minor:1190 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~projected/kube-api-access-67sxk:{mountpoint:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~projected/kube-api-access-67sxk major:0 minor:674 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:667 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:521 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl:{mountpoint:/var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl major:0 minor:264 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cba33300-f7ef-4547-97ff-62e223da79cf/volumes/kubernetes.io~projected/kube-api-access-6qv7x:{mountpoint:/var/lib/kubelet/pods/cba33300-f7ef-4547-97ff-62e223da79cf/volumes/kubernetes.io~projected/kube-api-access-6qv7x major:0 minor:719 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q:{mountpoint:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d56089bf-177c-492d-8964-73a45574e7ed/volumes/kubernetes.io~projected/kube-api-access-f2gnl:{mountpoint:/var/lib/kubelet/pods/d56089bf-177c-492d-8964-73a45574e7ed/volumes/kubernetes.io~projected/kube-api-access-f2gnl major:0 minor:314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~projected/kube-api-access-rcq7v:{mountpoint:/var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~projected/kube-api-access-rcq7v major:0 minor:767 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~secret/serving-cert major:0 minor:677 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~projected/kube-api-access-dtp2z:{mountpoint:/var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~projected/kube-api-access-dtp2z major:0 minor:738 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~projected/kube-api-access-gg62n:{mountpoint:/var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~projected/kube-api-access-gg62n major:0 minor:875 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~secret/serving-cert major:0 minor:865 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2742559-1f28-4f2c-a873-d6a9348972fb/volumes/kubernetes.io~projected/kube-api-access-nfz8z:{mountpoint:/var/lib/kubelet/pods/e2742559-1f28-4f2c-a873-d6a9348972fb/volumes/kubernetes.io~projected/kube-api-access-nfz8z major:0 minor:668 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9:{mountpoint:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9 major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~projected/kube-api-access-56twk:{mountpoint:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~projected/kube-api-access-56twk major:0 minor:1046 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/default-certificate major:0 minor:1044 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1043 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/stats-auth major:0 minor:1039 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~projected/kube-api-access-vv6gf:{mountpoint:/var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~projected/kube-api-access-vv6gf major:0 minor:1214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~secret/cert major:0 minor:1207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~projected/kube-api-access-5kvhc:{mountpoint:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~projected/kube-api-access-5kvhc major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:541 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/webhook-cert major:0 minor:733 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~projected/kube-api-access-wmrqg:{mountpoint:/var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~projected/kube-api-access-wmrqg major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:745 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~projected/kube-api-access-62ptf:{mountpoint:/var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~projected/kube-api-access-62ptf major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7b68603-8af3-4a50-8d39-86bbcdf1c597/volumes/kubernetes.io~projected/kube-api-access-vntrg:{mountpoint:/var/lib/kubelet/pods/f7b68603-8af3-4a50-8d39-86bbcdf1c597/volumes/kubernetes.io~projected/kube-api-access-vntrg major:0 minor:1047 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~projected/kube-api-access-mmcz9:{mountpoint:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~projected/kube-api-access-mmcz9 major:0 minor:1082 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1009 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1072 fsType:tmpfs blockSize:0} overlay_0-1001:{mountpoint:/var/lib/containers/storage/overlay/e9d87d87c606792daf6c6b0385b7a464a3c588e8034ed06c00f4a9223fef2cdc/merged major:0 minor:1001 fsType:overlay blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/85f25a42668b2d8f8ac98fcd8d29d177e3c4de217e418936ea65b175b3cf7533/merged major:0 minor:1005 fsType:overlay blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/f9d3c84142517cd138f2413be7ba207ae7d7046b40bf9ac399e2c38b3a171198/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/4cc134f9a5cd137a95af0619e57eaebb3220b97c00a8691b75a3176981868a04/merged major:0 minor:1014 fsType:overlay blockSize:0} 
overlay_0-1015:{mountpoint:/var/lib/containers/storage/overlay/e0aa3fb5f31c92612b3331b88ac6db5f08ba1fcc385a1f78317089fbde85515e/merged major:0 minor:1015 fsType:overlay blockSize:0} overlay_0-1021:{mountpoint:/var/lib/containers/storage/overlay/698d87aa18f3cef35205a71dbb366b2a8a3b65ece4befb4848729be851b0cf76/merged major:0 minor:1021 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/de70e0b1abe5c65920d60055c544cea0c3a810a9a4268f92ec464483ab9da89d/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/4c53aaa3c766c2e21b92b2cba6cdee74b21d19b33610cf550c4c808bc7d7686c/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/503135a3055577c7861a65492d6a13d77e902d2f70fa9c4aeac1457916ebc554/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/c3f502afbf870e480c1c94d0f27414a6e308929e12936850951aefc141d1c0c8/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1053:{mountpoint:/var/lib/containers/storage/overlay/359baabc94fd417186be7801408f5d213ed1ca2b25042e276a2b7d3f8e3ccaa0/merged major:0 minor:1053 fsType:overlay blockSize:0} overlay_0-1060:{mountpoint:/var/lib/containers/storage/overlay/6d677da9d1fc8264cd5c50fdfe84866553af7c26039eac7d6405ec68b50ac2b8/merged major:0 minor:1060 fsType:overlay blockSize:0} overlay_0-1063:{mountpoint:/var/lib/containers/storage/overlay/d83e065ea8f0012f9ac9d60234c68c7fc199d472cd4d5ee3a13a355838494369/merged major:0 minor:1063 fsType:overlay blockSize:0} overlay_0-1068:{mountpoint:/var/lib/containers/storage/overlay/72f8eb4f1204c87433153d00745af4dde07ed87d75e214963289f73557c9f93e/merged major:0 minor:1068 fsType:overlay blockSize:0} overlay_0-1070:{mountpoint:/var/lib/containers/storage/overlay/40a8eda613a45fbf3650748dd934b536b0e04548637a110982fd7650aaa08427/merged major:0 minor:1070 fsType:overlay blockSize:0} 
overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/0136b1dcc64ad0d3a3a8a60ecab64f2bb4fea0df611999b2575e2abe12a6aad5/merged major:0 minor:1078 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/51d81061591a8eac68a7a85fec38c970a4fe1e25474ae3b3343edeba1f14701d/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/2a69a51b552187acb3dfa82824730c57c368946b50b92484caf37eba36c68b55/merged major:0 minor:1094 fsType:overlay blockSize:0} overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/1c8e0efeb2035e541e6f9fcab3ada756cd333159cc6bad9b2ab9396b73e20d36/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-1106:{mountpoint:/var/lib/containers/storage/overlay/95374d213e6877887c45e1805956b4bf2d7605976f105363d0a1df2444db0e52/merged major:0 minor:1106 fsType:overlay blockSize:0} overlay_0-1108:{mountpoint:/var/lib/containers/storage/overlay/00511068baeea93ec3326d4be40509e6a5ae16a51c41350e556126ca4ef7bb3e/merged major:0 minor:1108 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/771a6ef444cfd1a1b4d7c9ae02cfe4a381844a9234f2b1114fb37abd54bb46ce/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1114:{mountpoint:/var/lib/containers/storage/overlay/a8ea0b25767934d3569df743669b02e1938fe931f99a67873e4e4701b06bba92/merged major:0 minor:1114 fsType:overlay blockSize:0} overlay_0-1119:{mountpoint:/var/lib/containers/storage/overlay/b7974d5c1525c5cb500cc9610820c1e16eca027aa3eb034caa29095b9a244528/merged major:0 minor:1119 fsType:overlay blockSize:0} overlay_0-1121:{mountpoint:/var/lib/containers/storage/overlay/23b76cacf71d3d3ee8e5230b5f7b7787d043610c813aa526d3ae39f35668052f/merged major:0 minor:1121 fsType:overlay blockSize:0} overlay_0-1128:{mountpoint:/var/lib/containers/storage/overlay/a04e49de18072cfa62d551f2dc9607273bb6ded1d2e496b9494a3d44a9b4feb2/merged major:0 minor:1128 fsType:overlay blockSize:0} 
overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/2b000f33e91284ee25052775726c8942709788d05c0488f8133b951d04b067d6/merged major:0 minor:1132 fsType:overlay blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/8386dd9e8c9a46ee7c5128b13bf33a88ecb762ddd83dc30382b0bcdf9ba3b857/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1136:{mountpoint:/var/lib/containers/storage/overlay/e8cdf84c16333dc8a543aabd29b37e065afc53e93d076c800f590f99ddc68757/merged major:0 minor:1136 fsType:overlay blockSize:0} overlay_0-1138:{mountpoint:/var/lib/containers/storage/overlay/d35603de4da84a58c78c4c9346be5ba798a04630b9661ba5c2595bcbe428ea84/merged major:0 minor:1138 fsType:overlay blockSize:0} overlay_0-1140:{mountpoint:/var/lib/containers/storage/overlay/2ca6d5b3d0c08367c17a76d156bd3182c942b8efc3277dd922e6304cda113756/merged major:0 minor:1140 fsType:overlay blockSize:0} overlay_0-1146:{mountpoint:/var/lib/containers/storage/overlay/88c41e909f345f6b40f89ba86b3eb675044d70c8aaa13a80ee34fdee4a201099/merged major:0 minor:1146 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/8169ba4f260226deedc768dc0d21871ebc2ea33599dfe689a6893603a406fd54/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-1153:{mountpoint:/var/lib/containers/storage/overlay/15196be25517fbad9944ca8dc5798db6b0dfc14cd6e8be6ca5fba20a9e1f7ffc/merged major:0 minor:1153 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/3619cef3dbc8d8eae831d0928051aefa4e241d3e82c098af355bbce1c657a0c8/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1162:{mountpoint:/var/lib/containers/storage/overlay/2567884e70b347ff084db5029fca4e0e0a71c69fd490283ec03afd68303273e0/merged major:0 minor:1162 fsType:overlay blockSize:0} overlay_0-1167:{mountpoint:/var/lib/containers/storage/overlay/5c1c83b7d583a5b451e484c8059c29486d6518757576e9a4c2b1433a26d02cc7/merged major:0 minor:1167 fsType:overlay blockSize:0} 
overlay_0-1169:{mountpoint:/var/lib/containers/storage/overlay/5ea2f48b5b1a637f0d99348e697208b48b51412f9b27e566c193089d04338513/merged major:0 minor:1169 fsType:overlay blockSize:0} overlay_0-1194:{mountpoint:/var/lib/containers/storage/overlay/bb25475b7e0ff36e22f19b486b395f605e34a33532aa7304ef64b19aa12a5119/merged major:0 minor:1194 fsType:overlay blockSize:0} overlay_0-1195:{mountpoint:/var/lib/containers/storage/overlay/c6b68ca2d4d460c031c0429bf08738f76af5491169a1fa5a45aabb0b86c59a09/merged major:0 minor:1195 fsType:overlay blockSize:0} overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/28da6316d4e90d5b2f22b58294ededf72e1d5983d74e6ce755c345196ea24257/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3e15df7b511d07530e8097da3b8fc8943a006dd08de698a3ec232b5a6b93b12b/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1217:{mountpoint:/var/lib/containers/storage/overlay/ea84a008904efd96d72c0a2049f0a355f93392b68616d5552dec0ea8caa72a92/merged major:0 minor:1217 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/505e461d18c30a095d5934997edb13000d7a47bf33ad9b20853232f97a64fa25/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/15f01deb4ab497d91f622675f60f0e855693766540ae9ff4e2a30e81bbf6fb54/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/db9c10fb78b7c84d5bad58fa27fd84070b6de6bbd1e2895d431829cdc64003a5/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/162fd1652dab1e53525d2bd9ed8782172ccc1ca9fd57d981bec3a3431cbaeb13/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/6da148745ca9151032b82a2c1e9898728b4bc4124b86b9ea39dc939518e1051e/merged major:0 minor:138 fsType:overlay blockSize:0} 
overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/15e3d3a61f49cc873877c380ff1ba610ae69ab0dff1f84e044be83d77dd1fe21/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ca4ef5f273658c2f0ac27108513a03e4717e00aca75e6361a5a5f98b3b101af5/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/e089634cb7842a25768d0bb516d7162efb8d63ffcf2bc16a3d6f36dc60460161/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/014d95194b4028a53626398a38c08d36cbb9f3098e27be75a0d89e0b4d3d9ba5/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/32162fd12a572a71591fdd1359e6d4544cababb7dbe316303f8c83b0020411e7/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-167:{mountpoint:/var/lib/containers/storage/overlay/9f6e00018799d2796cedb84da3a413e5128c0e4293f054377062145baf870d98/merged major:0 minor:167 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/3b513ee961359669705362e7371564f1d0e17baafb3007206507ed2010ada3ab/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2e72d41f6d85089a5d77d079151d819aa1c77ef1e52fb75fb34dfa0ede6a2314/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/32124ee8694c9b34488bab7eb742dc9b5f377fbc178316c0d5aeb6608b8f77cc/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/24df0a275e56935a697db889bbc9bbbd27e90f78ca2db30863de7bb506ef7398/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/a9c601662dd223a12d57aa63a7ed568f61cfc64c895610665161c008680f6d90/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/e83dd687fd60d175c9b00e32df7e8d7c347edf1f3dcd1749a948e9ca1cf28890/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/bd1616afefaffaa6083e4928cacadbbf145fbcefa7a46945b703568c2c6207ab/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/61dc9b4a330d6dc1210a239eabc75def091e83b251cd64c9b08d3566dd90c3c8/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-229:{mountpoint:/var/lib/containers/storage/overlay/bf0f819f911c93926b4d16635f54f6f9ea5e415d106cd7d02c2b8d101b3cef5d/merged major:0 minor:229 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/a9637eadba2e5e770c4200d2bc633f5420326caa798514b5fb1340b07723d32d/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/1d6da2fac25bb5f9d861ced1c5a42a2f289fe3119a915dac81404c1480874283/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/30501fea601fb38955f429099bc47c2376772448c3284ec89d82a4485d0274ac/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/baa44976eb7c61a33f6e8fb1f9b8869132324d62873f7e84dc01352a8ba11735/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/4d33795c48e6c06a6bba3e78a2e1f63b523f5454e3a423e7f0a79a876ee04a3d/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1ee374b9e875f8e97bdc3f51208e8e0f4095ddea09a39d9017b13d0e34097401/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/b4737cfb0bee92a9b39bd98408228b425332bce3e47632c13f04d91db556f58e/merged major:0 minor:287 fsType:overlay blockSize:0} 
overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/0b5995d6ab4cb1d37852d1385eda72c1305fb81f36822392b0f9eb0d5df6636a/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/6d9634c168c7db74feb81783daef03e386fda835e5c7ab8ddab1fbc252214bc8/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/084579f7fda123e3eacb1faeae140270ffd5afc52088f22ea8e34202cc3e30b3/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/139e598740f254c07cdc42110c5735a5226ae18efcdc86c26683ca34ac686033/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/6bb6dffd18980dc8faf8a352db4a975c02ccc90f05f7257e1d5871754f0ea951/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/bb1a76c3a3934eb3479404c0933495e566a1555931e23a36a69c56824bdde124/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/948c9f77b063159944c4bcd634da0c249036d7d3d5b3d362fb6bd3ed5bb1d3ea/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/093fe8c29ac319f65dacffd1c0acbdf5c9d57c4cb472c071b4b41826c1d92057/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/263d3cf66d9c592bac79d4ea39398f78eb16290cd4cbc2ac0b174cd4e2eac7ca/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/786f696124862660c8c7c972cbbe021eaacaaa01f1a48e2d58ec2dc2feb222ca/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/661280a755388b355cfeccc20cad8e878bc1c2a11205bc4e5c502148684cb14c/merged major:0 minor:326 fsType:overlay blockSize:0} 
overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/93c76be0230bd7a56d1666515e009e971c94c522d3413c79cbc581c146b1f221/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/c3bb03d0e1db29d8771746649439f42ba7ad550d13f98ccce37dce65c699794f/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/938bb765f45422e3cce9a77a4765a4866f127763e02344466bba359e8599165c/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/185b5e8cd7d56e4d67a45b91d650ba82d8ea9b9a0d2f3b7085857d9c2c5031ec/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/42cd3b60f299f4af6ccdd3f0fccfdc031415df28d3d4891a8f0f3c9ee270b2fd/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/daee2147e2ba7afff16344b09931b0417f0ae3bc6618e875708760bcb8fab84e/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/e0b29cf46848771286d455431d018b46018c8925b3f240e16d80936aeae3e0f5/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/54d79cea40c32e562dc299d7600e1acdf485a97ea811bfedb2159629f1612878/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/e8a3f0f5610f6babe7318a77a2fc338cc19103ed133e3e64c812598ae8fe2eaf/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/d35af704fa3cbd964f058217004627292bf2c2bc7c16eb604ae32e4ab82ac032/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/cc22c8c878068d04a749a16b50ef91622a38210b29704426be8be1ef321ff887/merged major:0 minor:362 fsType:overlay blockSize:0} 
overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/23562b0da925eedbe579aa989d208fbbdf0793df5f762183183fece2f79ab61a/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/6150e19fc0cf18e9d218a6652ffb68c578ac80e94e03895a5b416656312f9824/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/63aa8a8f5c32ca6187a8d88e1951176cc1797d18c6d7757825f3a02e5ba08e08/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/e89d5445b2c6106ad6e5c484090e707ec833d850813500ade21c33dcda01e46e/merged major:0 minor:374 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/b2f1e3a329f7390d8f0b04155e4206a7c233721a554614025fb1b79e9e5ea234/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/f2cb1e4493afa1af709709b3f5653cd49249fe40e4d9dff176afbc9f80caf73d/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/b1865134ef7939e164b82d5f176e696491185103738418d25c234dfd1155d8f2/merged major:0 minor:386 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/4ad9085e18da71724136d55f2095748acd5eb847ba252245fc0cadf18b64d56b/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-392:{mountpoint:/var/lib/containers/storage/overlay/79bd45c310e13d89d39a608a182aca9fcdaaed45d69d37742cc0a7c99f7322de/merged major:0 minor:392 fsType:overlay blockSize:0} overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/a83619e5ca0ad774227246a09e0c9f4233ef6c2ba52088511683ac9cfada7655/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/dab89731d34151820dec87498061a56142b8cf4ba250e040a09c12920cbeb0ce/merged major:0 minor:416 fsType:overlay blockSize:0} 
overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/5d5236d7285e543b6102f9e0cf0e4ba2419a5a001e196b6e3707e08dfebde413/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/ea232715e5523cc0aaf9ff0246b713b2fe16df1230dbb4f18ad1e7985795cf8d/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/ef451cf4bed3dc8a8797c0bb5e3ef93636fc4fe5c8f402fedda637d9be247932/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/a82e56e41e7e9ee81f90c114e9c3ed0c6c4e573183ea40d6902d63fbc8977bf9/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-433:{mountpoint:/var/lib/containers/storage/overlay/c7986a6d3ba4540664198bd6f516834f6a6d9e646426b20ece812194d3374955/merged major:0 minor:433 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/3fa37d26e60d1d5cedcacb77209edb20e70fe68fc649a1b9d6e724adcfd13b61/merged major:0 minor:439 fsType:overlay blockSize:0} overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/d6a259285b9e54c886179ed3aaef5abb2770bc388d8a2764529c386465ed14b3/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-447:{mountpoint:/var/lib/containers/storage/overlay/367cf769c1e8a647c20847034f0daae45f0fe42b25bbd484b30bb74483477e77/merged major:0 minor:447 fsType:overlay blockSize:0} overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/45cbb1dbc6ab3ca9e7eed7009262991af67b839dcea221f72e65651e22d09dba/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/07fb9f1d0448255a9c7647215669c3deb3bf0b30f6104eb06d41824b316b93c8/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/af1f6380a25eabffcbbb101d0aa475854410e09a495911f82e7b86512c70aa37/merged major:0 minor:457 fsType:overlay blockSize:0} 
overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/012131d416db44ec42ba327d7cd505961ab81ceec19c23577b962b0de19a0397/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/717eb83034fb57a5d58a5b0d01007dda7ee334891ea6fc7a75ead8c49da2f3ce/merged major:0 minor:463 fsType:overlay blockSize:0} overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/8b45e546bb829c83741ed7d56049992239323f38c265fb06341f4db2c71328a0/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-474:{mountpoint:/var/lib/containers/storage/overlay/80d9b77a50d5b55097fd33aa7b7d8fda712e632f905b5c13944b18601563eb7f/merged major:0 minor:474 fsType:overlay blockSize:0} overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/eafdfb920f1d48d88fd1ab0132c39ab468ed2303694ae04cb01f351c00601739/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-480:{mountpoint:/var/lib/containers/storage/overlay/26fbd1ef7f8da39f0a8a14fdde5441a5c2e35c328e15e7b979201f34d339c84e/merged major:0 minor:480 fsType:overlay blockSize:0} overlay_0-49:{mountpoint:/var/lib/containers/storage/overlay/1ad41016ae978edf73414a0cc45dfc8b35799e15aa8c1349c3ff2e418adef635/merged major:0 minor:49 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/a8f2e2302e8546956ea1755af37b6be718c21d8f4f0ec52a2b07712687c7aa5e/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/4d35742601dcd2f1b5c6d5bf3f158c0a1b2f8249cd8c4bc3a856155e88fb9b2c/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/91e9a85ce159683c204e5f1cad88096ccfc1413ade0c6666948a7272969ee4ff/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/b0438507c50c741adad2728bb281fa988887e82cba018666060ec9dcd7bfa7fb/merged major:0 minor:510 fsType:overlay blockSize:0} 
overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/45cd1d3c906bee131663251196d5ca1f9288083a80a6a640024b623b77b4c191/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/6d759fb33f263fafdb5176b93bbb1e1aca399d257671513040b0bf9bc5560452/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/803f6a092bb4f2059e816d54f81bbc0b0f7664a1feabcf075300da3d8eb49c39/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-520:{mountpoint:/var/lib/containers/storage/overlay/e17d079362b6cbe9411be38ad9f98e19028f3b5b1c5e0597c97eb831c095a18b/merged major:0 minor:520 fsType:overlay blockSize:0} overlay_0-528:{mountpoint:/var/lib/containers/storage/overlay/a7d1b6e0f1e2f4c2689297ddccaf6c1a857f7b7cf16f17e2624f567826f9622b/merged major:0 minor:528 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/ef3a83c908e6703940850fa0e294a42aef078b52add5765abf486cdae7d8488f/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/48cdc70b28e84971b60275bc502396993c62bbff7028a64514c60c299d5f4378/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-551:{mountpoint:/var/lib/containers/storage/overlay/880e7c67dd9fbc2bea222036dfe3ceff26f7ced12fbeb4bd0be81ea1630aa0a9/merged major:0 minor:551 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/25e7e2aee1e6f9dd19afc34a7811944d3f2e288791fa4cc90ef416a60ca28a51/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/be8bac0418a6485f3224baa1d8e3fb2178b1a50292727aa3dd9be7a470bd411c/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-561:{mountpoint:/var/lib/containers/storage/overlay/f5a5814935ee9ef4d32a9506c11d3388d90f87ac61b1f8ffd13449db33005060/merged major:0 minor:561 fsType:overlay blockSize:0} 
overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/825fa8a0c5c5fe6c5bd6ae4c5eafcb925c1e0342e73858ae15e806e21fc7237f/merged major:0 minor:574 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/4236c93703037732424258550622ab79448b95aeb703bfe4aa7e1ca3df2c0410/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/747b7b5f23b21c240a33c190b1e4893a2a4bd7096644ea232e032881de673d19/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/0ce79f38d4a361e82a0702c4ea916eacd5e6ef46468199772c9123c691b95af8/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/bc6a3e9942f77e8f7ec6ba33ad1d85067fd2c52fd5c39cbde3878675a675455f/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/3c0b8e36ba906c6d8bddfec8898d5a3744335af94c50be292b8258f3ae0eb819/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/6c0efe84633f0e40f2ac22299f6962ce05e4b86af605b28ce2d3828c66ade80d/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/391c1c020645407684eb1019e92ece0d2c29b04870d87e74da41d37fa697a707/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/0b6a3a8df7cfe2980433ae14d251e9af3c1e57437dc2cee183d2d75b1e81a66f/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-616:{mountpoint:/var/lib/containers/storage/overlay/652ffaa3fcaa4c08d2db161edb0baf6e9c7d6c56f5f062950dde000c55cbedd3/merged major:0 minor:616 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/d486387d38757f21f452b404af7c108f89506b9f2f9f30f91a120a3475362e65/merged major:0 minor:619 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/be82516143badacc66ee1ea8d567650c6dcabbd439d40b44fbe8b9320dc5c621/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/4e7156cdcb64c76c2a89830465d91428fbbd30abed5f878acb79c726f088e82a/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/4715aa8b5e9a88017f8cbae1fc15534ea7d30660620e0f600deb960311fc1e3a/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/078c96d0aa3543cb5f1d9f34ca369616ba57e2504099c7e3a0153853b4a6a71b/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/b494abf8447972e7d13f4a900a1287d31e464f60476ac41c48f679094203707e/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/4b8a6dc9b6472b25a49a1ffcdad78dd12b9af185648d31cba4afb54b79bfbe7c/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-663:{mountpoint:/var/lib/containers/storage/overlay/bd6653d0f7da7782f79403271ca19b4abe71b96f3be961ee3173bbdff6addf75/merged major:0 minor:663 fsType:overlay blockSize:0} overlay_0-676:{mountpoint:/var/lib/containers/storage/overlay/820122699c45186239a1057e57709c3a4d7c320456a0a3d5af79c30e2ad0e932/merged major:0 minor:676 fsType:overlay blockSize:0} overlay_0-679:{mountpoint:/var/lib/containers/storage/overlay/3025b7a52ff72a7c48b18631867b114dad3776cd54c5af500c9932bc1bcd2ad7/merged major:0 minor:679 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/0a527d6be5156ee40de5dc405131c43b91513c1d1f8d3623583649d5ef8f30dd/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/c6ea4afe2a9e61cc8842f23a98f80a683381252b7b99fb442e8f9c7517c63e45/merged major:0 minor:700 fsType:overlay blockSize:0} 
overlay_0-702:{mountpoint:/var/lib/containers/storage/overlay/8e7f86435b944d11d869bab5fc6bfc8e5d5b9da79ad822483f9541b89f3c0a89/merged major:0 minor:702 fsType:overlay blockSize:0} overlay_0-707:{mountpoint:/var/lib/containers/storage/overlay/215ce6ec4c9ff271009e2d5f3f2204b3f1d3ecfef539c7200ebea02c10354fe9/merged major:0 minor:707 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/35ed627ff800bd8a6771c9438e4451a71e8d9f0644ae5a277f728758b126d549/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/082c889f439683ca52e57ced6809ec7acf6e9ac400f2b2aa0503253dbf8015d1/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/aace42d44173bfed1fe57f42047f5daee9d536e20d47181f9c35bb56b4e58862/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-730:{mountpoint:/var/lib/containers/storage/overlay/09bddc136c34cea22b34220e256e82dbf6efa9cd5886fd4935ff3a81eadafced/merged major:0 minor:730 fsType:overlay blockSize:0} overlay_0-732:{mountpoint:/var/lib/containers/storage/overlay/0d2c2e9233a8de5bb2e5d085d10d4d35ae570ef604d4e1fada99ea312fbd2c0a/merged major:0 minor:732 fsType:overlay blockSize:0} overlay_0-734:{mountpoint:/var/lib/containers/storage/overlay/f539c50268a6f706c730c4bdd2be73038f66e746e44cce1b0069c413a8e34d92/merged major:0 minor:734 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/ff0d6c36c19e7fe68e0df49ce3c13d0c8e4d8b71b2056f5c8cb4dfea77675abf/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/a19332b6e2dcc150985635217f5eaa65fc619c79a4231f73b684b2aa844923df/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-740:{mountpoint:/var/lib/containers/storage/overlay/d2058732f74aa48d561b9ae4b7b6d33eaf54366ff682ce494f7cf10f9456f3ca/merged major:0 minor:740 fsType:overlay blockSize:0} 
overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/228bbf987ff6373ac2af90ba690cb1e637798156ec1bb60aa4ca7d425c47ae09/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/d456bc9e1bab5c12d330ab01a74eeeb601b26874ca5db045773ced298d017a88/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-751:{mountpoint:/var/lib/containers/storage/overlay/ca42deeb90b1cd4ef85cceca34167908dc9bd5e3c217ebdde1445d6152b8f1f7/merged major:0 minor:751 fsType:overlay blockSize:0} overlay_0-774:{mountpoint:/var/lib/containers/storage/overlay/1bcefe41cfba6475c64c3e6f8d139d10d31bd34c1c8c19facc59391f82d579a4/merged major:0 minor:774 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/2b9bdd543571c11cfda6ac2c656468e954dd1167a0d6450fd1238544721cda72/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/acec71e7eb9f5c6c2ddd96479897c67ce67176e49116f462aeada9e5a013d0ea/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/9146b9439a20c00f9e7aadb4bbd4a9ce86e9c419472f41447226a8c7c2b7e4b3/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/0b95bc6abc31a79adf1cebc623474bafdf66c43cc6382880d9359727b5b7ea8a/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/f11207d59fcca5085e5a254daf239c9bfbf1b9d3875821344626c93a7cd4bb9a/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-818:{mountpoint:/var/lib/containers/storage/overlay/f3e895af47c7e5702a7b561c267e0bff2d307c4d7247156e1fdbfff9fd4034c3/merged major:0 minor:818 fsType:overlay blockSize:0} overlay_0-825:{mountpoint:/var/lib/containers/storage/overlay/28109083e598f4a2a6d5a8a13646787dff1f87cfbebb7b6fe2878156058c5e57/merged major:0
minor:825 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/f2b70f94a2cc663500f4615610be6d3ebb5015e758f46b224121c4b1851afa26/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-857:{mountpoint:/var/lib/containers/storage/overlay/9cf4dc6bfbe522aeed3d9271fd9c98b7b5ea6e29792e316d5231ba57f6d6b68b/merged major:0 minor:857 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/e497bcadb18e223006846b693c54a16fca072da8581063a01f7ab449eb961bef/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/ae7e0ee2d69987e77133c610122d5f321df0db75316e6acf2cbd650c7a11c266/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/eedc55ce5641a6458aab5e29a1caf969307bfae6e1ee8ffe2ff703017b61f621/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-879:{mountpoint:/var/lib/containers/storage/overlay/f7e84c2f09c9ae34fcd9e0994c8e3dc0bf5acc45b67195769dc24468a8cc4638/merged major:0 minor:879 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/27a562dd2106a59b8ed866801ba353c9472866930aae4fcce574bf4c6b40c735/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-895:{mountpoint:/var/lib/containers/storage/overlay/4e7d863576cf572079e53a98119d1248f8152fe3a8dc4173df0af9187e7de40a/merged major:0 minor:895 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/993451e52bf1ba1106a31028ff7a07f7ec3dfdbc49adc2029969abe1b75a5f3e/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/f89f841d9eec94522fa8b3046f88b7711f731d42f19d9e5c03964f4c3a685a2b/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/3d19217aa7b6904d5fd3fb8b50ccce0496c54b752848d58ebd468f36f809b95a/merged major:0 minor:901 
fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/f0d94210966470d2a10df351e524a8e9b15f1ee60b4ac3f80ec83922ac13eba7/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/731c7692bdf201e7735dfd43f36a19f43363829c7682887148b8719cbbd242eb/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-906:{mountpoint:/var/lib/containers/storage/overlay/b057dfddcc4ec789f6cef52a088700227f3e95dcfa277cbc203283856c8aa26f/merged major:0 minor:906 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/ca94279c41ad27f9b080eeed36ef57710f2c07e6a8c6de2de857dbebd280f773/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/b97d6ad8263fc92a5c59fc562579b546ac5c13b4924e80c0b286f688810bc10a/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/b9af6e5ab8280b09076826c3bb032f3e31ec6c0bd5f1b721406c49767e9ad97b/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-935:{mountpoint:/var/lib/containers/storage/overlay/49178337b48e22bc8ba57d9551e11c76197d1ace82fe41df746ea13b2a729ef8/merged major:0 minor:935 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/6c31f86372c471a8b9791fe8a6b270fb07265d8648957ebfeaf5cf43ba70dedf/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/872c1c0837c48ffd5edd055e57db6f2d99dc1c5f15e12150fafadbc128771736/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-956:{mountpoint:/var/lib/containers/storage/overlay/ef524b2390f93f6c5bf09e423dcbdb4dc3cea5048de6e901d3633f9dc890e7fe/merged major:0 minor:956 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/a1edfcdc30a0080b77d78894325c81d9055c0afb4edec2dad87be5f53f953024/merged major:0 minor:96 fsType:overlay 
blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/2f8d7a56adb8c8a0b990da4f8082f20222b576524552a49452eea31854688e22/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/ddc2d965ddbe6d6cc30becf14b1ba5b7e49cc768ee12015b009af35070a3d3f9/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/0f1ab910ef5930b8b29eb1cd1d487952b104059112d130f9a2523672eb6fdb80/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/d000e089c7119eec9d518c13fed6f823868029a3a022d129deca36e6e6fceca1/merged major:0 minor:996 fsType:overlay blockSize:0}] Mar 12 14:35:41.134963 master-0 kubenswrapper[37036]: I0312 14:35:41.133016 37036 manager.go:217] Machine: {Timestamp:2026-03-12 14:35:41.132071297 +0000 UTC m=+0.139812254 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:e4246b74030446349cda326caa7abc15 SystemUUID:e4246b74-0304-4634-9cda-326caa7abc15 BootID:00119185-c574-4bb3-ab0c-7bce10775874 Filesystems:[{Device:/var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~projected/kube-api-access-7dkwb DeviceMajor:0 DeviceMinor:307 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~projected/kube-api-access-gg62n DeviceMajor:0 DeviceMinor:875 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-730 DeviceMajor:0 DeviceMinor:730 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:375 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e/userdata/shm DeviceMajor:0 DeviceMinor:419 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~projected/kube-api-access-t4q4w DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1325db6b5fc63da3d3f80a9e903b690f2007b20dd9156b1536d772080219b0fc/userdata/shm DeviceMajor:0 DeviceMinor:843 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1021 DeviceMajor:0 DeviceMinor:1021 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/kube-api-access-ktncx DeviceMajor:0 DeviceMinor:548 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-561 
DeviceMajor:0 DeviceMinor:561 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~projected/kube-api-access-2vnhl DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f/userdata/shm DeviceMajor:0 DeviceMinor:725 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~projected/kube-api-access-jcz8p DeviceMajor:0 DeviceMinor:1089 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:522 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16c9911f528d88ff6368917af5d3a0bfb97b0cd22d43dad86b75920f982a3c90/userdata/shm DeviceMajor:0 DeviceMinor:777 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1039 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~projected/kube-api-access-wwtr9 DeviceMajor:0 DeviceMinor:147 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:593 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~projected/kube-api-access-56twk DeviceMajor:0 DeviceMinor:1046 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39547af9c96ab9ffa0c68d5520b2aefe82b1e2e9c5c31895677204de893a9b6a/userdata/shm DeviceMajor:0 DeviceMinor:918 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-734 DeviceMajor:0 DeviceMinor:734 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1108 DeviceMajor:0 DeviceMinor:1108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1140 DeviceMajor:0 DeviceMinor:1140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1153 DeviceMajor:0 DeviceMinor:1153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:757 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6248f60ded635728b07f9ffbb9d72d48359f97cdb83b7f5d2e6153af60f77309/userdata/shm DeviceMajor:0 DeviceMinor:1102 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1195 DeviceMajor:0 DeviceMinor:1195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~projected/kube-api-access-wmrqg DeviceMajor:0 DeviceMinor:756 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~projected/kube-api-access-m2cq8 DeviceMajor:0 DeviceMinor:670 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~projected/kube-api-access-5bdqv DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4/userdata/shm DeviceMajor:0 DeviceMinor:427 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aca8c7cb3cefb96ea167603c4fdab132577bdaf6be51eb609e79f8b9ea4df1b7/userdata/shm DeviceMajor:0 DeviceMinor:325 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1043 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/70710a0b-8b5d-40f5-b726-fd5e2836ffbe/volumes/kubernetes.io~projected/kube-api-access-b9cfq DeviceMajor:0 DeviceMinor:718 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1194 DeviceMajor:0 DeviceMinor:1194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-956 DeviceMajor:0 DeviceMinor:956 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~projected/kube-api-access-zv69s DeviceMajor:0 DeviceMinor:403 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4/userdata/shm DeviceMajor:0 DeviceMinor:532 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-551 DeviceMajor:0 DeviceMinor:551 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1/userdata/shm DeviceMajor:0 DeviceMinor:1051 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1207 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~projected/kube-api-access-d6z8v DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d00a8cc7-7774-40bd-94a1-9ac2d0f63234/volumes/kubernetes.io~projected/kube-api-access-bbv7q DeviceMajor:0 DeviceMinor:256 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f0298c9e8c7173c3949586fa731c073a558897f0792064c146633191e5244fab/userdata/shm DeviceMajor:0 DeviceMinor:985 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~projected/kube-api-access-2z8pd DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9757edbb-8ce2-4513-9b32-a552df50634c/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:669 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1188 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1128 DeviceMajor:0 DeviceMinor:1128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c13c5f-3d1f-4e0a-b77b-732255680086/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:745 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~projected/kube-api-access-qxdqn DeviceMajor:0 DeviceMinor:1129 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1167 DeviceMajor:0 DeviceMinor:1167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef5679f7-5bf5-409d-b74b-64a9cbb6c701/volumes/kubernetes.io~projected/kube-api-access-vv6gf DeviceMajor:0 DeviceMinor:1214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/kube-api-access-qkmrv DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/kube-api-access-w7nnk DeviceMajor:0 DeviceMinor:499 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 
DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-167 DeviceMajor:0 DeviceMinor:167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-906 DeviceMajor:0 DeviceMinor:906 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ddff8978b61211cf6981c8dcb5ac20ebbd703343ccf0d4864c6b4d8c7b748d88/userdata/shm DeviceMajor:0 DeviceMinor:776 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1096 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b91ed73a339c21ab18d17bc789c0ba3301a928d38dce2afb46b197b75f34b51e/userdata/shm DeviceMajor:0 DeviceMinor:712 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-229 DeviceMajor:0 DeviceMinor:229 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 
DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/210d19917e7415e5f1763dbc60d79ff661ed77ac9ff9582758b201449af2e08f/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345/userdata/shm DeviceMajor:0 DeviceMinor:239 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4/userdata/shm DeviceMajor:0 DeviceMinor:598 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-774 DeviceMajor:0 DeviceMinor:774 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~projected/kube-api-access-27tm9 DeviceMajor:0 DeviceMinor:319 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:972 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ae8ffe0fbe6457550dbcfde92cc569b256c78e408c6b4f88c41a2524eefcfab/userdata/shm DeviceMajor:0 DeviceMinor:304 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~projected/kube-api-access-qhdq5 DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:410 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a41bc83813b39c2fa459a0e9284786027dca250eb150090c47a705729e7d08f5/userdata/shm 
DeviceMajor:0 DeviceMinor:587 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f569ed3b-924d-4829-b192-f508ee70658d/volumes/kubernetes.io~projected/kube-api-access-62ptf DeviceMajor:0 DeviceMinor:760 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1146 DeviceMajor:0 DeviceMinor:1146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1187 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f9b15c6-b4ee-4907-8daa-376e3b438896/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:409 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~projected/kube-api-access-rcq7v DeviceMajor:0 DeviceMinor:767 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~projected/kube-api-access-2dlf2 DeviceMajor:0 DeviceMinor:874 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1037 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/57327dd3cf51a7946c6428acbb4cffd5439484941e4f876980813ac47338ecdb/userdata/shm DeviceMajor:0 DeviceMinor:578 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-818 DeviceMajor:0 DeviceMinor:818 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1015 DeviceMajor:0 DeviceMinor:1015 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1070 DeviceMajor:0 DeviceMinor:1070 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e/userdata/shm DeviceMajor:0 DeviceMinor:585 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2376cfb1ee60c237c8964f78aeee837ea12e09f11b9b3dfc1320568c3b4a4743/userdata/shm DeviceMajor:0 DeviceMinor:770 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~projected/kube-api-access-m97zx DeviceMajor:0 DeviceMinor:807 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6f5cd3ff-ced6-47e3-8054-d83053d87680/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:306 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-751 DeviceMajor:0 DeviceMinor:751 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:638 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1060 DeviceMajor:0 DeviceMinor:1060 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb/userdata/shm DeviceMajor:0 DeviceMinor:435 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1217 DeviceMajor:0 DeviceMinor:1217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/47bb0848ead40d3cf654dbab8841bba9aaf69454627f9510e73ce08c4830d731/userdata/shm DeviceMajor:0 DeviceMinor:370 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~projected/kube-api-access-mmcz9 
DeviceMajor:0 DeviceMinor:1082 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-480 DeviceMajor:0 DeviceMinor:480 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-702 DeviceMajor:0 DeviceMinor:702 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-447 DeviceMajor:0 DeviceMinor:447 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1086c8d5071e504e73694312636385db33200a4d801de67bcefe278f7df988d9/userdata/shm DeviceMajor:0 DeviceMinor:769 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8/userdata/shm DeviceMajor:0 DeviceMinor:697 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3ded18e3d6f447b9e66f1d69e24e4a3db671b9e96141bd007fb10aec777b522/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8e733069-752a-4140-83eb-8287f1bce1a7/volumes/kubernetes.io~projected/kube-api-access-qvngn DeviceMajor:0 DeviceMinor:303 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:533 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~projected/kube-api-access-276qm DeviceMajor:0 DeviceMinor:809 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-676 DeviceMajor:0 DeviceMinor:676 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda/userdata/shm DeviceMajor:0 DeviceMinor:595 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1162 DeviceMajor:0 DeviceMinor:1162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9d51570-06dd-4e2f-9c19-07fb694279ae/volumes/kubernetes.io~projected/kube-api-access-2cqkl DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:625 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~projected/kube-api-access-9czc5 DeviceMajor:0 DeviceMinor:377 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:521 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-679 DeviceMajor:0 DeviceMinor:679 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~projected/kube-api-access-4krm9 DeviceMajor:0 DeviceMinor:628 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568/userdata/shm DeviceMajor:0 DeviceMinor:600 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-663 
DeviceMajor:0 DeviceMinor:663 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701/userdata/shm DeviceMajor:0 DeviceMinor:429 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:885 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-935 DeviceMajor:0 DeviceMinor:935 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9757756c-cb67-4b6f-99c3-dd63f904897a/volumes/kubernetes.io~projected/kube-api-access-hxnzm DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~projected/kube-api-access-l9gvf DeviceMajor:0 DeviceMinor:305 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a917672632ddd41ece955a9caf8b6f8e502d8c6d1a179cc7a84283068844b577/userdata/shm DeviceMajor:0 DeviceMinor:536 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3dc73c14-852d-4957-b6ac-84366ba0594f/volumes/kubernetes.io~projected/kube-api-access-sc9zd DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2/userdata/shm DeviceMajor:0 DeviceMinor:344 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-433 DeviceMajor:0 DeviceMinor:433 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:979 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:420 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7b68603-8af3-4a50-8d39-86bbcdf1c597/volumes/kubernetes.io~projected/kube-api-access-vntrg DeviceMajor:0 DeviceMinor:1047 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-474 DeviceMajor:0 DeviceMinor:474 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1169 DeviceMajor:0 DeviceMinor:1169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1072 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-520 DeviceMajor:0 DeviceMinor:520 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:591 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0/userdata/shm DeviceMajor:0 DeviceMinor:722 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0a898118-6d01-4211-92f0-43967b75405c/volumes/kubernetes.io~projected/kube-api-access-8rfxl DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~projected/kube-api-access-smwtd DeviceMajor:0 DeviceMinor:664 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-528 DeviceMajor:0 DeviceMinor:528 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1138 DeviceMajor:0 DeviceMinor:1138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:864 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-879 DeviceMajor:0 DeviceMinor:879 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~projected/kube-api-access-tkgft DeviceMajor:0 DeviceMinor:512 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1077 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d56089bf-177c-492d-8964-73a45574e7ed/volumes/kubernetes.io~projected/kube-api-access-f2gnl DeviceMajor:0 DeviceMinor:314 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:627 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a071b87c5a3a1d570849d8f30a4ef18e47cf5ac7ae26cb6fa07ebd774622be6c/userdata/shm DeviceMajor:0 DeviceMinor:469 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volumes/kubernetes.io~projected/kube-api-access-2k4mx DeviceMajor:0 
DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534/userdata/shm DeviceMajor:0 DeviceMinor:1055 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5a8c18378832b96fedb1cc482f9c56eff1b7bedfc155a7a794d6f4818bd05ce5/userdata/shm DeviceMajor:0 DeviceMinor:1215 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1081 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1106 DeviceMajor:0 DeviceMinor:1106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4962f86c890ab9be604d23a0da920ebdb05a4b0dbc30671f52da23640f2df151/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/40912d56-8288-4d58-ad91-7455bd460887/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:301 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:765 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-740 DeviceMajor:0 DeviceMinor:740 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4/volumes/kubernetes.io~projected/kube-api-access-fdzwp DeviceMajor:0 DeviceMinor:340 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3815db41-fe01-43f6-b75c-4ccca9124f51/volumes/kubernetes.io~projected/kube-api-access-shknb DeviceMajor:0 DeviceMinor:526 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~projected/kube-api-access-l6d7w DeviceMajor:0 DeviceMinor:1190 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb06459-09da-4620-91cf-8c3fe8f425db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:408 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bae2bf48688fed38a08346cb01a13f07f5d6ebf571f08738d916c6d12d3bb19/userdata/shm DeviceMajor:0 DeviceMinor:778 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6b9f13-4a3a-4920-a84b-f76516501f81/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:411 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3edaa533-ecbb-443e-a270-4cb4f923daf6/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:766 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75d2cc73f5d8290489c2ec72fc148a6f125ffa59eaf8f20c0252b0060ef642a3/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac/userdata/shm DeviceMajor:0 DeviceMinor:1049 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b067750f065ba84cd14fac759b144c851d17dfcf9ba98a9096e90f8e2906332d/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1063 DeviceMajor:0 DeviceMinor:1063 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~projected/kube-api-access-cqh9t DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7fdce71e-8085-4316-be40-e535530c2ca4/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:590 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-895 DeviceMajor:0 DeviceMinor:895 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42dbcb8f-e8c4-413e-977d-40aa6df226aa/volumes/kubernetes.io~projected/kube-api-access-j47xv DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~projected/kube-api-access-nqqcc DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~projected/kube-api-access-lcwrv DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1047bb4a-135f-488d-9399-0518cb3a827d/volumes/kubernetes.io~projected/kube-api-access-flj9j DeviceMajor:0 DeviceMinor:980 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1121 DeviceMajor:0 DeviceMinor:1121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39bda5b8-c748-4023-8680-8e8454512e5b/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:626 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~projected/kube-api-access-jh2zk DeviceMajor:0 DeviceMinor:886 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2742559-1f28-4f2c-a873-d6a9348972fb/volumes/kubernetes.io~projected/kube-api-access-nfz8z DeviceMajor:0 DeviceMinor:668 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1080 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67/userdata/shm 
DeviceMajor:0 DeviceMinor:1083 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:699 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:733 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4/userdata/shm DeviceMajor:0 DeviceMinor:1191 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/761993bb-2cba-4e1a-b304-36a24817af94/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-49 DeviceMajor:0 DeviceMinor:49 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/900b2a0e-1e2b-41a3-86f5-639ec1e95969/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1045 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-1114 DeviceMajor:0 DeviceMinor:1114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f/userdata/shm DeviceMajor:0 DeviceMinor:75 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:799 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1053 DeviceMajor:0 DeviceMinor:1053 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257/userdata/shm DeviceMajor:0 DeviceMinor:820 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f6ebd3-98c8-457c-a88c-7e81270f01b5/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1044 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~projected/kube-api-access-67sxk DeviceMajor:0 DeviceMinor:674 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~projected/kube-api-access-2mbjg DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~projected/kube-api-access-vpq4d DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57930a54-89ab-4ec8-a504-74035bb74d63/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f4e5afa4afe018a7c389e007a13d614d179ad2102c4e104bffdef509a1d7c7b/userdata/shm DeviceMajor:0 DeviceMinor:764 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/95c11263-0d68-4b11-bcfd-bcb0e96a6988/volumes/kubernetes.io~projected/kube-api-access-6pfns DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:736 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1bba274a-38c7-4d13-88a5-6bc39228416c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1bc0d552-01c7-4212-a551-d16419f2dc80/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:594 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1189 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cba33300-f7ef-4547-97ff-62e223da79cf/volumes/kubernetes.io~projected/kube-api-access-6qv7x DeviceMajor:0 DeviceMinor:719 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a35674af-162c-4a4a-8605-158b2326267e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:704 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~projected/kube-api-access-5kvhc DeviceMajor:0 DeviceMinor:795 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a81be38f-e07e-4863-8d61-fdefc2713a6a/volumes/kubernetes.io~projected/kube-api-access-b7krt DeviceMajor:0 DeviceMinor:711 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48b23f5b2fb0b4600ed151be719911ca6e8598a87db7cece2fed00b00050b177/userdata/shm DeviceMajor:0 DeviceMinor:549 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:316 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-825 DeviceMajor:0 DeviceMinor:825 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e4d9407-ff79-4396-a37f-896617e024d4/volumes/kubernetes.io~projected/kube-api-access-sjsjh DeviceMajor:0 DeviceMinor:421 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-857 DeviceMajor:0 DeviceMinor:857 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5679426d37d3354caeeb4580675058670c5c7ef6fa2efa546a861e1c9f923e06/userdata/shm DeviceMajor:0 DeviceMinor:630 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b0d9b5d35890bf7ee8f33755b50b3d62e47a389cd7d7e50fa4af660965af6cae/userdata/shm DeviceMajor:0 DeviceMinor:318 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:364 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50/userdata/shm DeviceMajor:0 DeviceMinor:720 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3f72fbbe-69f0-4622-be05-b839ff9b4d45/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-732 DeviceMajor:0 DeviceMinor:732 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85459175-2c9c-425d-bdfb-0a79c92ed110/volumes/kubernetes.io~projected/kube-api-access-v8tts DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1001 DeviceMajor:0 DeviceMinor:1001 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1068 DeviceMajor:0 DeviceMinor:1068 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81cd0864a54b3fb544c03e1c4cc3bb2a1e8301732b585b1ac0d2dad7435e59f9/userdata/shm DeviceMajor:0 DeviceMinor:506 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-707 DeviceMajor:0 DeviceMinor:707 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef824102-83a5-4629-8057-d4f1a57a530d/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:541 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2f59d485-9f69-4f36-836e-6338f84b7d69/volumes/kubernetes.io~projected/kube-api-access-fbwl8 DeviceMajor:0 
DeviceMinor:717 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1009 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891/userdata/shm DeviceMajor:0 DeviceMinor:91 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a2435b91-86d6-415b-a978-34cc859e74f2/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:412 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dd29b21c-7a0e-4311-952f-427b00468e66/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:677 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:423 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/272b53c4-134c-404d-9a27-c7371415b1f7/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:588 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b/userdata/shm DeviceMajor:0 DeviceMinor:1090 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8660cef9-0ab3-453e-a4b9-c243daa6ddb0/volumes/kubernetes.io~projected/kube-api-access-clj2j DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6ab4911ef54a5ef7fd92d9752905d7377429179c56c4e77bafea0e6505d40e2/userdata/shm DeviceMajor:0 DeviceMinor:436 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/volumes/kubernetes.io~projected/kube-api-access-dtp2z DeviceMajor:0 DeviceMinor:738 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:865 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1136 DeviceMajor:0 DeviceMinor:1136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07a6a1d6-fecf-4847-b7c1-160d5d7320fb/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:592 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8106d14a-b448-4dd1-bccd-926f85394b5d/volumes/kubernetes.io~projected/kube-api-access-jtqp6 DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:418 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/39252b5a-d014-4319-ad81-3c1bf2ef585e/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:544 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:887 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~projected/kube-api-access-ms688 DeviceMajor:0 DeviceMinor:1101 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~projected/kube-api-access-c4pvp DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-392 DeviceMajor:0 DeviceMinor:392 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61d829d7-38e1-4826-942c-f7317c4a4bec/volumes/kubernetes.io~projected/kube-api-access-zqx42 DeviceMajor:0 DeviceMinor:984 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/76d596c0-6a41-43e1-9516-aee9ad834ec2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/08ea0d9f-0635-4759-803e-572eca2f2d34/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9/userdata/shm DeviceMajor:0 DeviceMinor:315 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6b77ad35-2fff-47bb-ad34-abb3868b09a9/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:798 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6defef79-6058-466a-ae0b-8eb9258126be/volumes/kubernetes.io~projected/kube-api-access-zxt4g DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:667 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-616 DeviceMajor:0 DeviceMinor:616 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1100 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/59f21770-429b-4b63-82fd-50ce0daf698d/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:365 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1119 DeviceMajor:0 DeviceMinor:1119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61de099a-410b-4d30-83e8-19cf5901cb27/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:376 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3ec846db-e344-4f9e-95e6-7a0055f52766/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:527 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f/userdata/shm DeviceMajor:0 DeviceMinor:705 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86/userdata/shm DeviceMajor:0 DeviceMinor:724 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199/userdata/shm DeviceMajor:0 DeviceMinor:992 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8d775283-2696-4411-8ddf-d4e6000f0a0c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7433d9bf-4edf-4787-a7a1-e5102c7264c7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:94 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:031300aa1cb0172 MacAddress:52:72:fe:a9:a5:dc Speed:10000 Mtu:8900} {Name:04b735b224daf50 MacAddress:a6:a1:f4:34:05:f7 Speed:10000 Mtu:8900} {Name:0797fe88dc9adea MacAddress:5e:ed:2d:6e:b0:33 Speed:10000 Mtu:8900} {Name:1086c8d5071e504 MacAddress:8e:79:63:4d:1f:83 Speed:10000 Mtu:8900} {Name:1325db6b5fc63da MacAddress:fa:82:bb:74:07:f4 Speed:10000 Mtu:8900} {Name:1349683c6b7a48b MacAddress:a6:aa:e6:47:f1:94 Speed:10000 Mtu:8900} {Name:16c9911f528d88f MacAddress:ae:f2:70:c6:c4:8d Speed:10000 Mtu:8900} {Name:19d81290fc93fac MacAddress:32:7f:40:13:11:e3 Speed:10000 Mtu:8900} {Name:1ba5c83b988cf94 MacAddress:be:e7:8f:c3:97:c3 Speed:10000 Mtu:8900} {Name:1cc258e5add24f8 MacAddress:1e:09:54:50:00:60 Speed:10000 Mtu:8900} {Name:210d19917e7415e MacAddress:22:fb:e6:33:c2:aa Speed:10000 Mtu:8900} {Name:2376cfb1ee60c23 MacAddress:de:91:9c:9d:2d:0c Speed:10000 Mtu:8900} {Name:2e21aa41c709714 MacAddress:e2:96:94:f6:fe:91 Speed:10000 Mtu:8900} {Name:2ed4af146d2bc6a MacAddress:46:f0:99:90:ed:48 Speed:10000 Mtu:8900} {Name:39547af9c96ab9f MacAddress:8e:5b:bc:ca:10:7f Speed:10000 Mtu:8900} 
{Name:3e2810ad638aff3 MacAddress:6a:4e:d1:19:c6:b3 Speed:10000 Mtu:8900} {Name:43ed8c1a4973dd1 MacAddress:8a:c2:6b:45:40:cb Speed:10000 Mtu:8900} {Name:44f838e36ef84ec MacAddress:7e:c1:af:4f:2e:fa Speed:10000 Mtu:8900} {Name:47bb0848ead40d3 MacAddress:8e:f2:96:0a:f2:a9 Speed:10000 Mtu:8900} {Name:48b23f5b2fb0b46 MacAddress:3e:9d:30:3e:5d:79 Speed:10000 Mtu:8900} {Name:5679426d37d3354 MacAddress:c2:2f:59:f6:7e:ba Speed:10000 Mtu:8900} {Name:57327dd3cf51a79 MacAddress:12:7d:ee:2a:16:a8 Speed:10000 Mtu:8900} {Name:59d708b78a7b260 MacAddress:e6:c4:bf:bd:b7:9e Speed:10000 Mtu:8900} {Name:5a8c18378832b96 MacAddress:be:6e:ec:12:e3:42 Speed:10000 Mtu:8900} {Name:6248f60ded63572 MacAddress:9a:f5:ce:4f:db:c5 Speed:10000 Mtu:8900} {Name:643a9eb1fc3e8f4 MacAddress:3a:59:9a:8b:db:91 Speed:10000 Mtu:8900} {Name:667a33334db41ad MacAddress:62:7c:36:d9:01:5f Speed:10000 Mtu:8900} {Name:6724dfeb711ea97 MacAddress:1a:4d:fc:51:86:04 Speed:10000 Mtu:8900} {Name:6a6f22295caf556 MacAddress:12:52:78:95:cf:8a Speed:10000 Mtu:8900} {Name:7bbac52760e3fcb MacAddress:6e:c2:91:d7:8b:cb Speed:10000 Mtu:8900} {Name:7e1bd495d46e0c7 MacAddress:c6:0f:12:6e:d3:78 Speed:10000 Mtu:8900} {Name:7f4e5afa4afe018 MacAddress:56:34:3d:9c:57:0f Speed:10000 Mtu:8900} {Name:81cd0864a54b3fb MacAddress:fa:1b:8a:27:66:a8 Speed:10000 Mtu:8900} {Name:84ea14c79c94352 MacAddress:6e:95:54:25:85:62 Speed:10000 Mtu:8900} {Name:8bae2bf48688fed MacAddress:ca:42:4f:fd:fb:de Speed:10000 Mtu:8900} {Name:9ae8ffe0fbe6457 MacAddress:1a:51:b0:51:77:cc Speed:10000 Mtu:8900} {Name:a41bc83813b39c2 MacAddress:ee:6c:56:3f:00:5b Speed:10000 Mtu:8900} {Name:a6ab4911ef54a5e MacAddress:c6:44:1d:48:57:6f Speed:10000 Mtu:8900} {Name:a917672632ddd41 MacAddress:3a:3f:14:86:9a:fe Speed:10000 Mtu:8900} {Name:aca8c7cb3cefb96 MacAddress:c2:bf:60:b1:28:61 Speed:10000 Mtu:8900} {Name:b00ca20b86c2035 MacAddress:56:b8:3c:e6:29:e7 Speed:10000 Mtu:8900} {Name:b067750f065ba84 MacAddress:e2:94:5f:c1:0f:5b Speed:10000 Mtu:8900} {Name:b4d899998f74545 
MacAddress:76:cc:1f:81:b8:9c Speed:10000 Mtu:8900} {Name:b4e230d3f789f82 MacAddress:2a:22:c2:2e:13:40 Speed:10000 Mtu:8900} {Name:b820d186bee28ed MacAddress:4e:34:1c:51:aa:9d Speed:10000 Mtu:8900} {Name:b91ed73a339c21a MacAddress:4a:33:0a:bb:90:b0 Speed:10000 Mtu:8900} {Name:bb2ba7d0c1c5133 MacAddress:02:93:03:7b:f1:99 Speed:10000 Mtu:8900} {Name:bc3c55d0c455838 MacAddress:e2:14:4f:0c:71:6d Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:d6:09:00:13:e9:99 Speed:0 Mtu:8900} {Name:c0057d7bbbc9bd9 MacAddress:1e:50:8c:2a:60:9b Speed:10000 Mtu:8900} {Name:cf474d719fe0217 MacAddress:32:a1:7a:17:fa:86 Speed:10000 Mtu:8900} {Name:d46849ab9a3cac2 MacAddress:16:7b:d5:77:5d:2d Speed:10000 Mtu:8900} {Name:d6cba419a6f6e10 MacAddress:fa:29:7c:86:2c:e8 Speed:10000 Mtu:8900} {Name:dc05a7757105e04 MacAddress:8e:b4:3c:41:86:7f Speed:10000 Mtu:8900} {Name:ddff8978b61211c MacAddress:32:d3:6e:03:24:0d Speed:10000 Mtu:8900} {Name:e7f98f2c20f8a17 MacAddress:fe:43:2e:bf:28:bb Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fa:69:5a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:bb:95:55 Speed:-1 Mtu:9000} {Name:f02823618c817a5 MacAddress:0e:78:db:14:0d:da Speed:10000 Mtu:8900} {Name:f0298c9e8c7173c MacAddress:06:1b:84:1a:78:96 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:3f:76:c6:88:2a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 
Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 14:35:41.135511 master-0 kubenswrapper[37036]: I0312 14:35:41.134951 37036 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 14:35:41.135511 master-0 kubenswrapper[37036]: I0312 14:35:41.135016 37036 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 14:35:41.135511 master-0 kubenswrapper[37036]: I0312 14:35:41.135260 37036 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 14:35:41.135511 master-0 kubenswrapper[37036]: I0312 14:35:41.135401 37036 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135424 37036 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135680 37036 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135696 37036 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135704 37036 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135726 37036 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135759 37036 state_mem.go:36] "Initialized new in-memory state store" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135848 37036 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135895 37036 kubelet.go:418] "Attempting to sync node with API server" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135923 37036 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135936 37036 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135947 37036 kubelet.go:324] "Adding apiserver pod source" Mar 12 14:35:41.136020 master-0 kubenswrapper[37036]: I0312 14:35:41.135963 37036 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.139702 37036 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.139841 37036 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140121 37036 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140249 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140270 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140277 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140284 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140291 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140300 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140311 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140319 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140330 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140436 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140451 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140465 37036 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.140487 37036 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.141297 37036 server.go:1280] "Started kubelet" Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.141388 37036 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 14:35:41.141837 master-0 kubenswrapper[37036]: I0312 14:35:41.141452 37036 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 14:35:41.143570 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 12 14:35:41.146159 master-0 kubenswrapper[37036]: I0312 14:35:41.143672 37036 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 14:35:41.165083 master-0 kubenswrapper[37036]: I0312 14:35:41.165009 37036 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 14:35:41.169059 master-0 kubenswrapper[37036]: I0312 14:35:41.166160 37036 server.go:449] "Adding debug handlers to kubelet server" Mar 12 14:35:41.184953 master-0 kubenswrapper[37036]: I0312 14:35:41.184244 37036 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 14:35:41.194012 master-0 kubenswrapper[37036]: I0312 14:35:41.192441 37036 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 14:35:41.194943 master-0 kubenswrapper[37036]: I0312 14:35:41.194879 37036 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 14:35:41.194943 master-0 kubenswrapper[37036]: I0312 14:35:41.194942 37036 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 14:35:41.195193 master-0 kubenswrapper[37036]: I0312 14:35:41.195163 37036 
volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 14:35:41.195193 master-0 kubenswrapper[37036]: I0312 14:35:41.195190 37036 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 14:35:41.195301 master-0 kubenswrapper[37036]: I0312 14:35:41.195176 37036 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 14:02:38 +0000 UTC, rotation deadline is 2026-03-13 08:47:43.22937694 +0000 UTC Mar 12 14:35:41.195301 master-0 kubenswrapper[37036]: I0312 14:35:41.195235 37036 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h12m2.034145717s for next certificate rotation Mar 12 14:35:41.195301 master-0 kubenswrapper[37036]: I0312 14:35:41.195241 37036 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 14:35:41.196642 master-0 kubenswrapper[37036]: I0312 14:35:41.196614 37036 factory.go:55] Registering systemd factory Mar 12 14:35:41.196716 master-0 kubenswrapper[37036]: I0312 14:35:41.196645 37036 factory.go:221] Registration of the systemd container factory successfully Mar 12 14:35:41.197048 master-0 kubenswrapper[37036]: I0312 14:35:41.197032 37036 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 14:35:41.197131 master-0 kubenswrapper[37036]: I0312 14:35:41.197094 37036 factory.go:153] Registering CRI-O factory Mar 12 14:35:41.197169 master-0 kubenswrapper[37036]: I0312 14:35:41.197134 37036 factory.go:221] Registration of the crio container factory successfully Mar 12 14:35:41.197282 master-0 kubenswrapper[37036]: I0312 14:35:41.197265 37036 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 14:35:41.197350 master-0 kubenswrapper[37036]: I0312 14:35:41.197295 37036 factory.go:103] 
Registering Raw factory Mar 12 14:35:41.197350 master-0 kubenswrapper[37036]: I0312 14:35:41.197310 37036 manager.go:1196] Started watching for new ooms in manager Mar 12 14:35:41.197843 master-0 kubenswrapper[37036]: I0312 14:35:41.197825 37036 manager.go:319] Starting recovery of all containers Mar 12 14:35:41.199828 master-0 kubenswrapper[37036]: E0312 14:35:41.199806 37036 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 12 14:35:41.211410 master-0 kubenswrapper[37036]: I0312 14:35:41.211346 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b77ad35-2fff-47bb-ad34-abb3868b09a9" volumeName="kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.211617 master-0 kubenswrapper[37036]: I0312 14:35:41.211599 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" volumeName="kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth" seLinuxMountContext="" Mar 12 14:35:41.211783 master-0 kubenswrapper[37036]: I0312 14:35:41.211768 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39252b5a-d014-4319-ad81-3c1bf2ef585e" volumeName="kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs" seLinuxMountContext="" Mar 12 14:35:41.211876 master-0 kubenswrapper[37036]: I0312 14:35:41.211864 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f5cd3ff-ced6-47e3-8054-d83053d87680" volumeName="kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config" seLinuxMountContext="" Mar 12 14:35:41.212104 master-0 kubenswrapper[37036]: I0312 14:35:41.212072 37036 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client" seLinuxMountContext="" Mar 12 14:35:41.212189 master-0 kubenswrapper[37036]: I0312 14:35:41.212157 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" volumeName="kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j" seLinuxMountContext="" Mar 12 14:35:41.212306 master-0 kubenswrapper[37036]: I0312 14:35:41.212292 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.212402 master-0 kubenswrapper[37036]: I0312 14:35:41.212390 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba" volumeName="kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca" seLinuxMountContext="" Mar 12 14:35:41.212507 master-0 kubenswrapper[37036]: I0312 14:35:41.212477 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.212595 master-0 kubenswrapper[37036]: I0312 14:35:41.212563 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config" seLinuxMountContext="" Mar 12 14:35:41.212706 master-0 kubenswrapper[37036]: I0312 14:35:41.212692 37036 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" volumeName="kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities" seLinuxMountContext="" Mar 12 14:35:41.212791 master-0 kubenswrapper[37036]: I0312 14:35:41.212779 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df31c4c2-304e-4bad-8e6f-18c174eba675" volumeName="kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config" seLinuxMountContext="" Mar 12 14:35:41.212877 master-0 kubenswrapper[37036]: I0312 14:35:41.212865 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config" seLinuxMountContext="" Mar 12 14:35:41.213001 master-0 kubenswrapper[37036]: I0312 14:35:41.212969 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca" seLinuxMountContext="" Mar 12 14:35:41.213109 master-0 kubenswrapper[37036]: I0312 14:35:41.213059 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd" seLinuxMountContext="" Mar 12 14:35:41.213183 master-0 kubenswrapper[37036]: I0312 14:35:41.213169 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2742559-1f28-4f2c-a873-d6a9348972fb" volumeName="kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content" seLinuxMountContext="" Mar 12 14:35:41.213250 master-0 kubenswrapper[37036]: I0312 14:35:41.213239 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="e2742559-1f28-4f2c-a873-d6a9348972fb" volumeName="kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z" seLinuxMountContext="" Mar 12 14:35:41.213310 master-0 kubenswrapper[37036]: I0312 14:35:41.213299 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1047bb4a-135f-488d-9399-0518cb3a827d" volumeName="kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.213397 master-0 kubenswrapper[37036]: I0312 14:35:41.213365 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.213479 master-0 kubenswrapper[37036]: I0312 14:35:41.213447 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca" seLinuxMountContext="" Mar 12 14:35:41.213561 master-0 kubenswrapper[37036]: I0312 14:35:41.213548 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca" seLinuxMountContext="" Mar 12 14:35:41.213623 master-0 kubenswrapper[37036]: I0312 14:35:41.213612 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f5cd3ff-ced6-47e3-8054-d83053d87680" volumeName="kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb" seLinuxMountContext="" Mar 12 14:35:41.213683 master-0 kubenswrapper[37036]: I0312 14:35:41.213671 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides" seLinuxMountContext="" Mar 12 14:35:41.213749 master-0 kubenswrapper[37036]: I0312 14:35:41.213737 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc0d552-01c7-4212-a551-d16419f2dc80" volumeName="kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 14:35:41.213860 master-0 kubenswrapper[37036]: I0312 14:35:41.213846 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d56089bf-177c-492d-8964-73a45574e7ed" volumeName="kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl" seLinuxMountContext="" Mar 12 14:35:41.213940 master-0 kubenswrapper[37036]: I0312 14:35:41.213927 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40912d56-8288-4d58-ad91-7455bd460887" volumeName="kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf" seLinuxMountContext="" Mar 12 14:35:41.214018 master-0 kubenswrapper[37036]: I0312 14:35:41.214003 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6b9f13-4a3a-4920-a84b-f76516501f81" volumeName="kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.214083 master-0 kubenswrapper[37036]: I0312 14:35:41.214071 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1047bb4a-135f-488d-9399-0518cb3a827d" volumeName="kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j" seLinuxMountContext="" Mar 12 14:35:41.214147 master-0 kubenswrapper[37036]: I0312 14:35:41.214136 37036 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy" seLinuxMountContext="" Mar 12 14:35:41.214229 master-0 kubenswrapper[37036]: I0312 14:35:41.214215 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99433993-93cf-46cb-bb66-485672cb2554" volumeName="kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles" seLinuxMountContext="" Mar 12 14:35:41.214296 master-0 kubenswrapper[37036]: I0312 14:35:41.214284 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access" seLinuxMountContext="" Mar 12 14:35:41.214360 master-0 kubenswrapper[37036]: I0312 14:35:41.214347 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dd912f8-2c4d-4a0a-ba41-918ab5c235ba" volumeName="kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs" seLinuxMountContext="" Mar 12 14:35:41.214420 master-0 kubenswrapper[37036]: I0312 14:35:41.214409 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e4d9407-ff79-4396-a37f-896617e024d4" volumeName="kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.214480 master-0 kubenswrapper[37036]: I0312 14:35:41.214469 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access" seLinuxMountContext="" Mar 12 14:35:41.214545 master-0 kubenswrapper[37036]: I0312 14:35:41.214534 37036 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config" seLinuxMountContext="" Mar 12 14:35:41.214608 master-0 kubenswrapper[37036]: I0312 14:35:41.214595 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edaa533-ecbb-443e-a270-4cb4f923daf6" volumeName="kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images" seLinuxMountContext="" Mar 12 14:35:41.214668 master-0 kubenswrapper[37036]: I0312 14:35:41.214656 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7433d9bf-4edf-4787-a7a1-e5102c7264c7" volumeName="kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w" seLinuxMountContext="" Mar 12 14:35:41.214745 master-0 kubenswrapper[37036]: I0312 14:35:41.214733 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9b15c6-b4ee-4907-8daa-376e3b438896" volumeName="kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache" seLinuxMountContext="" Mar 12 14:35:41.214803 master-0 kubenswrapper[37036]: I0312 14:35:41.214792 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" volumeName="kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content" seLinuxMountContext="" Mar 12 14:35:41.214866 master-0 kubenswrapper[37036]: I0312 14:35:41.214854 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" volumeName="kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq" seLinuxMountContext="" Mar 12 14:35:41.214954 master-0 kubenswrapper[37036]: I0312 14:35:41.214941 37036 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config" seLinuxMountContext="" Mar 12 14:35:41.215031 master-0 kubenswrapper[37036]: I0312 14:35:41.215019 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39252b5a-d014-4319-ad81-3c1bf2ef585e" volumeName="kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache" seLinuxMountContext="" Mar 12 14:35:41.215124 master-0 kubenswrapper[37036]: I0312 14:35:41.215112 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59f21770-429b-4b63-82fd-50ce0daf698d" volumeName="kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn" seLinuxMountContext="" Mar 12 14:35:41.215207 master-0 kubenswrapper[37036]: I0312 14:35:41.215194 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec846db-e344-4f9e-95e6-7a0055f52766" volumeName="kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft" seLinuxMountContext="" Mar 12 14:35:41.215311 master-0 kubenswrapper[37036]: I0312 14:35:41.215296 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29b21c-7a0e-4311-952f-427b00468e66" volumeName="kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.215398 master-0 kubenswrapper[37036]: I0312 14:35:41.215384 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config" seLinuxMountContext="" Mar 12 14:35:41.215497 master-0 kubenswrapper[37036]: I0312 14:35:41.215484 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.215576 master-0 kubenswrapper[37036]: I0312 14:35:41.215563 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99433993-93cf-46cb-bb66-485672cb2554" volumeName="kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config" seLinuxMountContext="" Mar 12 14:35:41.215658 master-0 kubenswrapper[37036]: I0312 14:35:41.215646 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" volumeName="kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk" seLinuxMountContext="" Mar 12 14:35:41.215741 master-0 kubenswrapper[37036]: I0312 14:35:41.215729 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" volumeName="kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.215847 master-0 kubenswrapper[37036]: I0312 14:35:41.215834 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl" seLinuxMountContext="" Mar 12 14:35:41.215978 master-0 kubenswrapper[37036]: I0312 14:35:41.215964 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.216067 master-0 kubenswrapper[37036]: I0312 14:35:41.216055 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5fb06459-09da-4620-91cf-8c3fe8f425db" volumeName="kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp" seLinuxMountContext="" Mar 12 14:35:41.216154 master-0 kubenswrapper[37036]: I0312 14:35:41.216139 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca" seLinuxMountContext="" Mar 12 14:35:41.216250 master-0 kubenswrapper[37036]: I0312 14:35:41.216237 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59f21770-429b-4b63-82fd-50ce0daf698d" volumeName="kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.216333 master-0 kubenswrapper[37036]: I0312 14:35:41.216321 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb06459-09da-4620-91cf-8c3fe8f425db" volumeName="kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s" seLinuxMountContext="" Mar 12 14:35:41.216416 master-0 kubenswrapper[37036]: I0312 14:35:41.216404 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 14:35:41.216512 master-0 kubenswrapper[37036]: I0312 14:35:41.216482 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.216596 master-0 kubenswrapper[37036]: I0312 14:35:41.216583 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="dd29b21c-7a0e-4311-952f-427b00468e66" volumeName="kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.216680 master-0 kubenswrapper[37036]: I0312 14:35:41.216667 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.216762 master-0 kubenswrapper[37036]: I0312 14:35:41.216749 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef824102-83a5-4629-8057-d4f1a57a530d" volumeName="kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert" seLinuxMountContext="" Mar 12 14:35:41.216846 master-0 kubenswrapper[37036]: I0312 14:35:41.216834 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx" seLinuxMountContext="" Mar 12 14:35:41.216936 master-0 kubenswrapper[37036]: I0312 14:35:41.216924 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" volumeName="kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy" seLinuxMountContext="" Mar 12 14:35:41.217048 master-0 kubenswrapper[37036]: I0312 14:35:41.217035 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df31c4c2-304e-4bad-8e6f-18c174eba675" volumeName="kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca" seLinuxMountContext="" Mar 12 14:35:41.217130 master-0 kubenswrapper[37036]: I0312 14:35:41.217118 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 14:35:41.217193 master-0 kubenswrapper[37036]: I0312 14:35:41.217182 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client" seLinuxMountContext="" Mar 12 14:35:41.217333 master-0 kubenswrapper[37036]: I0312 14:35:41.217321 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3815db41-fe01-43f6-b75c-4ccca9124f51" volumeName="kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb" seLinuxMountContext="" Mar 12 14:35:41.217394 master-0 kubenswrapper[37036]: I0312 14:35:41.217384 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b77ad35-2fff-47bb-ad34-abb3868b09a9" volumeName="kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx" seLinuxMountContext="" Mar 12 14:35:41.217456 master-0 kubenswrapper[37036]: I0312 14:35:41.217445 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f5cd3ff-ced6-47e3-8054-d83053d87680" volumeName="kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.217519 master-0 kubenswrapper[37036]: I0312 14:35:41.217505 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp" seLinuxMountContext="" Mar 12 14:35:41.217580 master-0 kubenswrapper[37036]: I0312 14:35:41.217569 37036 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6f5cd3ff-ced6-47e3-8054-d83053d87680" volumeName="kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images" seLinuxMountContext="" Mar 12 14:35:41.217642 master-0 kubenswrapper[37036]: I0312 14:35:41.217631 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fdce71e-8085-4316-be40-e535530c2ca4" volumeName="kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs" seLinuxMountContext="" Mar 12 14:35:41.217704 master-0 kubenswrapper[37036]: I0312 14:35:41.217693 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6" seLinuxMountContext="" Mar 12 14:35:41.217765 master-0 kubenswrapper[37036]: I0312 14:35:41.217753 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" volumeName="kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config" seLinuxMountContext="" Mar 12 14:35:41.217828 master-0 kubenswrapper[37036]: I0312 14:35:41.217815 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" volumeName="kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.217884 master-0 kubenswrapper[37036]: I0312 14:35:41.217874 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1047bb4a-135f-488d-9399-0518cb3a827d" volumeName="kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.217969 master-0 kubenswrapper[37036]: I0312 14:35:41.217956 37036 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="3edaa533-ecbb-443e-a270-4cb4f923daf6" volumeName="kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd" seLinuxMountContext="" Mar 12 14:35:41.218051 master-0 kubenswrapper[37036]: I0312 14:35:41.218038 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g" seLinuxMountContext="" Mar 12 14:35:41.218145 master-0 kubenswrapper[37036]: I0312 14:35:41.218133 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access" seLinuxMountContext="" Mar 12 14:35:41.218204 master-0 kubenswrapper[37036]: I0312 14:35:41.218194 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets" seLinuxMountContext="" Mar 12 14:35:41.218261 master-0 kubenswrapper[37036]: I0312 14:35:41.218250 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert" seLinuxMountContext="" Mar 12 14:35:41.218322 master-0 kubenswrapper[37036]: I0312 14:35:41.218309 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99433993-93cf-46cb-bb66-485672cb2554" volumeName="kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.218384 master-0 kubenswrapper[37036]: I0312 14:35:41.218373 37036 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies" seLinuxMountContext="" Mar 12 14:35:41.218446 master-0 kubenswrapper[37036]: I0312 14:35:41.218435 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9" seLinuxMountContext="" Mar 12 14:35:41.218506 master-0 kubenswrapper[37036]: I0312 14:35:41.218494 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40912d56-8288-4d58-ad91-7455bd460887" volumeName="kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.218571 master-0 kubenswrapper[37036]: I0312 14:35:41.218558 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d829d7-38e1-4826-942c-f7317c4a4bec" volumeName="kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls" seLinuxMountContext="" Mar 12 14:35:41.218660 master-0 kubenswrapper[37036]: I0312 14:35:41.218648 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b77ad35-2fff-47bb-ad34-abb3868b09a9" volumeName="kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images" seLinuxMountContext="" Mar 12 14:35:41.218730 master-0 kubenswrapper[37036]: I0312 14:35:41.218718 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b77ad35-2fff-47bb-ad34-abb3868b09a9" volumeName="kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls" seLinuxMountContext="" Mar 12 14:35:41.218786 master-0 kubenswrapper[37036]: I0312 14:35:41.218775 37036 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="2f59d485-9f69-4f36-836e-6338f84b7d69" volumeName="kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8" seLinuxMountContext="" Mar 12 14:35:41.218841 master-0 kubenswrapper[37036]: I0312 14:35:41.218830 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca" seLinuxMountContext="" Mar 12 14:35:41.218915 master-0 kubenswrapper[37036]: I0312 14:35:41.218890 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls" seLinuxMountContext="" Mar 12 14:35:41.218987 master-0 kubenswrapper[37036]: I0312 14:35:41.218975 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config" seLinuxMountContext="" Mar 12 14:35:41.219066 master-0 kubenswrapper[37036]: I0312 14:35:41.219054 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.219133 master-0 kubenswrapper[37036]: I0312 14:35:41.219121 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9b15c6-b4ee-4907-8daa-376e3b438896" volumeName="kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs" seLinuxMountContext="" Mar 12 14:35:41.219484 master-0 kubenswrapper[37036]: I0312 14:35:41.219469 37036 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="39252b5a-d014-4319-ad81-3c1bf2ef585e" volumeName="kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx" seLinuxMountContext="" Mar 12 14:35:41.219554 master-0 kubenswrapper[37036]: I0312 14:35:41.219542 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.219612 master-0 kubenswrapper[37036]: I0312 14:35:41.219600 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q" seLinuxMountContext="" Mar 12 14:35:41.219673 master-0 kubenswrapper[37036]: I0312 14:35:41.219661 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bba274a-38c7-4d13-88a5-6bc39228416c" volumeName="kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.219739 master-0 kubenswrapper[37036]: I0312 14:35:41.219728 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit" seLinuxMountContext="" Mar 12 14:35:41.219803 master-0 kubenswrapper[37036]: I0312 14:35:41.219789 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg" seLinuxMountContext="" Mar 12 14:35:41.219879 master-0 kubenswrapper[37036]: I0312 14:35:41.219865 37036 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba" volumeName="kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688" seLinuxMountContext="" Mar 12 14:35:41.219995 master-0 kubenswrapper[37036]: I0312 14:35:41.219981 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8106d14a-b448-4dd1-bccd-926f85394b5d" volumeName="kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.220059 master-0 kubenswrapper[37036]: I0312 14:35:41.220047 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" volumeName="kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.220120 master-0 kubenswrapper[37036]: I0312 14:35:41.220109 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap" seLinuxMountContext="" Mar 12 14:35:41.220187 master-0 kubenswrapper[37036]: I0312 14:35:41.220175 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba" volumeName="kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.220347 master-0 kubenswrapper[37036]: I0312 14:35:41.220332 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv" seLinuxMountContext="" Mar 12 14:35:41.220407 master-0 kubenswrapper[37036]: I0312 14:35:41.220396 37036 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef824102-83a5-4629-8057-d4f1a57a530d" volumeName="kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert" seLinuxMountContext="" Mar 12 14:35:41.220777 master-0 kubenswrapper[37036]: I0312 14:35:41.220763 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm" seLinuxMountContext="" Mar 12 14:35:41.220976 master-0 kubenswrapper[37036]: I0312 14:35:41.220963 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a35674af-162c-4a4a-8605-158b2326267e" volumeName="kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca" seLinuxMountContext="" Mar 12 14:35:41.221041 master-0 kubenswrapper[37036]: I0312 14:35:41.221030 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" volumeName="kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm" seLinuxMountContext="" Mar 12 14:35:41.221101 master-0 kubenswrapper[37036]: I0312 14:35:41.221090 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3dc73c14-852d-4957-b6ac-84366ba0594f" volumeName="kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd" seLinuxMountContext="" Mar 12 14:35:41.221168 master-0 kubenswrapper[37036]: I0312 14:35:41.221157 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba" volumeName="kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.221234 master-0 kubenswrapper[37036]: I0312 
14:35:41.221221 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61de099a-410b-4d30-83e8-19cf5901cb27" volumeName="kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key" seLinuxMountContext="" Mar 12 14:35:41.221292 master-0 kubenswrapper[37036]: I0312 14:35:41.221281 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config" seLinuxMountContext="" Mar 12 14:35:41.221354 master-0 kubenswrapper[37036]: I0312 14:35:41.221343 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59f21770-429b-4b63-82fd-50ce0daf698d" volumeName="kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca" seLinuxMountContext="" Mar 12 14:35:41.221414 master-0 kubenswrapper[37036]: I0312 14:35:41.221404 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a" volumeName="kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs" seLinuxMountContext="" Mar 12 14:35:41.221541 master-0 kubenswrapper[37036]: I0312 14:35:41.221508 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" volumeName="kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns" seLinuxMountContext="" Mar 12 14:35:41.221630 master-0 kubenswrapper[37036]: I0312 14:35:41.221617 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca" seLinuxMountContext="" Mar 12 14:35:41.221712 master-0 kubenswrapper[37036]: I0312 14:35:41.221700 
37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client" seLinuxMountContext="" Mar 12 14:35:41.227352 master-0 kubenswrapper[37036]: I0312 14:35:41.227310 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757edbb-8ce2-4513-9b32-a552df50634c" volumeName="kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert" seLinuxMountContext="" Mar 12 14:35:41.227533 master-0 kubenswrapper[37036]: I0312 14:35:41.227517 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls" seLinuxMountContext="" Mar 12 14:35:41.227597 master-0 kubenswrapper[37036]: I0312 14:35:41.227586 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de61e1fe-294c-48a6-8cf3-aeb4637ef2cc" volumeName="kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca" seLinuxMountContext="" Mar 12 14:35:41.227673 master-0 kubenswrapper[37036]: I0312 14:35:41.227661 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7b68603-8af3-4a50-8d39-86bbcdf1c597" volumeName="kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227727 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 
14:35:41.227742 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e4d9407-ff79-4396-a37f-896617e024d4" volumeName="kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227755 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227764 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2742559-1f28-4f2c-a873-d6a9348972fb" volumeName="kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227774 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227783 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227793 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59f21770-429b-4b63-82fd-50ce0daf698d" volumeName="kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 
14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227802 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a" volumeName="kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227814 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" volumeName="kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227825 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40912d56-8288-4d58-ad91-7455bd460887" volumeName="kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227835 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757756c-cb67-4b6f-99c3-dd63f904897a" volumeName="kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227847 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df31c4c2-304e-4bad-8e6f-18c174eba675" volumeName="kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227856 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides" 
seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227868 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85459175-2c9c-425d-bdfb-0a79c92ed110" volumeName="kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts" seLinuxMountContext="" Mar 12 14:35:41.227930 master-0 kubenswrapper[37036]: I0312 14:35:41.227883 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29b21c-7a0e-4311-952f-427b00468e66" volumeName="kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.228495 master-0 kubenswrapper[37036]: I0312 14:35:41.228440 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.228495 master-0 kubenswrapper[37036]: I0312 14:35:41.228462 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61de099a-410b-4d30-83e8-19cf5901cb27" volumeName="kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle" seLinuxMountContext="" Mar 12 14:35:41.228577 master-0 kubenswrapper[37036]: I0312 14:35:41.224619 37036 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228475 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228628 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fdce71e-8085-4316-be40-e535530c2ca4" volumeName="kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228638 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d829d7-38e1-4826-942c-f7317c4a4bec" volumeName="kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228648 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228658 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="76d596c0-6a41-43e1-9516-aee9ad834ec2" volumeName="kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228667 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c11263-0d68-4b11-bcfd-bcb0e96a6988" 
volumeName="kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228679 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef824102-83a5-4629-8057-d4f1a57a530d" volumeName="kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228688 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08ea0d9f-0635-4759-803e-572eca2f2d34" volumeName="kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228698 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc0d552-01c7-4212-a551-d16419f2dc80" volumeName="kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228710 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" volumeName="kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228721 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228729 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="42dbcb8f-e8c4-413e-977d-40aa6df226aa" volumeName="kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228737 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99433993-93cf-46cb-bb66-485672cb2554" volumeName="kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228745 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" volumeName="kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228753 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a35674af-162c-4a4a-8605-158b2326267e" volumeName="kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228764 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f569ed3b-924d-4829-b192-f508ee70658d" volumeName="kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228773 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228783 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" volumeName="kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228792 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757edbb-8ce2-4513-9b32-a552df50634c" volumeName="kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228801 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228809 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29b21c-7a0e-4311-952f-427b00468e66" volumeName="kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228818 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c13c5f-3d1f-4e0a-b77b-732255680086" volumeName="kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228826 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f59d485-9f69-4f36-836e-6338f84b7d69" volumeName="kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228835 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="42dbcb8f-e8c4-413e-977d-40aa6df226aa" volumeName="kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228845 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de61e1fe-294c-48a6-8cf3-aeb4637ef2cc" volumeName="kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228854 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" volumeName="kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228864 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cba33300-f7ef-4547-97ff-62e223da79cf" volumeName="kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228873 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228882 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="272b53c4-134c-404d-9a27-c7371415b1f7" volumeName="kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228932 37036 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3edaa533-ecbb-443e-a270-4cb4f923daf6" volumeName="kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228943 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61d829d7-38e1-4826-942c-f7317c4a4bec" volumeName="kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228951 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228960 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39252b5a-d014-4319-ad81-3c1bf2ef585e" volumeName="kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228969 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228980 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f569ed3b-924d-4829-b192-f508ee70658d" volumeName="kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228989 37036 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="dd29b21c-7a0e-4311-952f-427b00468e66" volumeName="kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.228998 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef5679f7-5bf5-409d-b74b-64a9cbb6c701" volumeName="kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229010 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4" volumeName="kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229021 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61de099a-410b-4d30-83e8-19cf5901cb27" volumeName="kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229030 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99433993-93cf-46cb-bb66-485672cb2554" volumeName="kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229041 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cba33300-f7ef-4547-97ff-62e223da79cf" volumeName="kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229050 37036 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="df31c4c2-304e-4bad-8e6f-18c174eba675" volumeName="kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229060 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229070 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="40912d56-8288-4d58-ad91-7455bd460887" volumeName="kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229079 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229089 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" volumeName="kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229098 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39bda5b8-c748-4023-8680-8e8454512e5b" volumeName="kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229109 37036 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8e4d9407-ff79-4396-a37f-896617e024d4" volumeName="kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229119 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e733069-752a-4140-83eb-8287f1bce1a7" volumeName="kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229128 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a6a1d6-fecf-4847-b7c1-160d5d7320fb" volumeName="kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229139 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bc0d552-01c7-4212-a551-d16419f2dc80" volumeName="kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229148 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="761993bb-2cba-4e1a-b304-36a24817af94" volumeName="kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229157 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229166 37036 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229175 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7433d9bf-4edf-4787-a7a1-e5102c7264c7" volumeName="kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229184 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9757edbb-8ce2-4513-9b32-a552df50634c" volumeName="kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229203 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec846db-e344-4f9e-95e6-7a0055f52766" volumeName="kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229213 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229225 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 
kubenswrapper[37036]: I0312 14:35:41.229235 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229243 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229252 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9d51570-06dd-4e2f-9c19-07fb694279ae" volumeName="kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229261 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" volumeName="kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229270 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900b2a0e-1e2b-41a3-86f5-639ec1e95969" volumeName="kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229280 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef824102-83a5-4629-8057-d4f1a57a530d" volumeName="kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs" seLinuxMountContext="" Mar 12 14:35:41.229918 
master-0 kubenswrapper[37036]: I0312 14:35:41.229291 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f9b15c6-b4ee-4907-8daa-376e3b438896" volumeName="kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229301 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edaa533-ecbb-443e-a270-4cb4f923daf6" volumeName="kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229310 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f72fbbe-69f0-4622-be05-b839ff9b4d45" volumeName="kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229325 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229336 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb06459-09da-4620-91cf-8c3fe8f425db" volumeName="kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229347 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow" seLinuxMountContext="" Mar 12 14:35:41.229918 
master-0 kubenswrapper[37036]: I0312 14:35:41.229358 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="addf66af-4d97-4c1e-960d-ace98c27961b" volumeName="kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229368 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cba33300-f7ef-4547-97ff-62e223da79cf" volumeName="kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229378 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07a6a1d6-fecf-4847-b7c1-160d5d7320fb" volumeName="kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229389 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1047bb4a-135f-488d-9399-0518cb3a827d" volumeName="kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229398 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="272b53c4-134c-404d-9a27-c7371415b1f7" volumeName="kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229408 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42dbcb8f-e8c4-413e-977d-40aa6df226aa" volumeName="kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 12 
14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229420 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de61e1fe-294c-48a6-8cf3-aeb4637ef2cc" volumeName="kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229430 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" volumeName="kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229441 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229450 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a898118-6d01-4211-92f0-43967b75405c" volumeName="kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229460 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec846db-e344-4f9e-95e6-7a0055f52766" volumeName="kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229470 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert" 
seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229479 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" volumeName="kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229490 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2435b91-86d6-415b-a978-34cc859e74f2" volumeName="kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229500 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9dfe48c-daa1-4c18-9cf5-7b4930a0e649" volumeName="kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229509 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3edaa533-ecbb-443e-a270-4cb4f923daf6" volumeName="kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229520 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229529 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57930a54-89ab-4ec8-a504-74035bb74d63" 
volumeName="kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229540 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6defef79-6058-466a-ae0b-8eb9258126be" volumeName="kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229552 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" volumeName="kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229561 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" volumeName="kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229569 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229581 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a" volumeName="kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229589 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8dd912f8-2c4d-4a0a-ba41-918ab5c235ba" volumeName="kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229599 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a35674af-162c-4a4a-8605-158b2326267e" volumeName="kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229608 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229617 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229626 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d775283-2696-4411-8ddf-d4e6000f0a0c" volumeName="kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229634 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a81be38f-e07e-4863-8d61-fdefc2713a6a" volumeName="kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229643 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b9d51570-06dd-4e2f-9c19-07fb694279ae" volumeName="kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229652 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" volumeName="kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229664 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef5679f7-5bf5-409d-b74b-64a9cbb6c701" volumeName="kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229673 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85459175-2c9c-425d-bdfb-0a79c92ed110" volumeName="kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229682 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6b9f13-4a3a-4920-a84b-f76516501f81" volumeName="kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229690 37036 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f59d485-9f69-4f36-836e-6338f84b7d69" volumeName="kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content" seLinuxMountContext="" Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229708 37036 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="f3c13c5f-3d1f-4e0a-b77b-732255680086" volumeName="kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229717 37036 reconstruct.go:97] "Volume reconstruction finished"
Mar 12 14:35:41.229918 master-0 kubenswrapper[37036]: I0312 14:35:41.229724 37036 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 14:35:41.233632 master-0 kubenswrapper[37036]: I0312 14:35:41.232480 37036 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 12 14:35:41.233632 master-0 kubenswrapper[37036]: I0312 14:35:41.233040 37036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 12 14:35:41.233632 master-0 kubenswrapper[37036]: I0312 14:35:41.233072 37036 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 12 14:35:41.233632 master-0 kubenswrapper[37036]: I0312 14:35:41.233089 37036 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 12 14:35:41.233632 master-0 kubenswrapper[37036]: E0312 14:35:41.233132 37036 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 14:35:41.234687 master-0 kubenswrapper[37036]: I0312 14:35:41.234659 37036 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 14:35:41.243260 master-0 kubenswrapper[37036]: I0312 14:35:41.243225 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_05fc4965-b390-4edc-a407-d431b06d7612/installer/0.log"
Mar 12 14:35:41.243260 master-0 kubenswrapper[37036]: I0312 14:35:41.243265 37036 generic.go:334] "Generic (PLEG): container finished" podID="05fc4965-b390-4edc-a407-d431b06d7612" containerID="6aa44e483ff3af56ade2c830f5190301f0a2aff21489693f95cab78436b2ad8d" exitCode=1
Mar 12 14:35:41.251256 master-0 kubenswrapper[37036]: I0312 14:35:41.251174 37036 generic.go:334] "Generic (PLEG): container finished" podID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerID="6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd" exitCode=0
Mar 12 14:35:41.269699 master-0 kubenswrapper[37036]: I0312 14:35:41.269638 37036 generic.go:334] "Generic (PLEG): container finished" podID="dd29b21c-7a0e-4311-952f-427b00468e66" containerID="b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc" exitCode=0
Mar 12 14:35:41.276389 master-0 kubenswrapper[37036]: I0312 14:35:41.274372 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/machine-api-operator/0.log"
Mar 12 14:35:41.276389 master-0 kubenswrapper[37036]: I0312 14:35:41.274889 37036 generic.go:334] "Generic (PLEG): container finished" podID="6f5cd3ff-ced6-47e3-8054-d83053d87680" containerID="d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae" exitCode=255
Mar 12 14:35:41.276619 master-0 kubenswrapper[37036]: I0312 14:35:41.276518 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_efd52682-bf05-44fc-9790-8adfc87ca087/installer/0.log"
Mar 12 14:35:41.276619 master-0 kubenswrapper[37036]: I0312 14:35:41.276567 37036 generic.go:334] "Generic (PLEG): container finished" podID="efd52682-bf05-44fc-9790-8adfc87ca087" containerID="a7a831aba8d50e763154f735949d2f89a1f0e98463882117ee4053d40ba3f7ce" exitCode=1
Mar 12 14:35:41.285702 master-0 kubenswrapper[37036]: I0312 14:35:41.285516 37036 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="a54d7c040e4e83aac6a6fc975cc3d2fd03101d4237db0646f2870734d1932e37" exitCode=0
Mar 12 14:35:41.285702 master-0 kubenswrapper[37036]: I0312 14:35:41.285570 37036 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="557e5767b6a5906fd35802d8cc7a729030365600bcb6aca559cdc1d58e816deb" exitCode=0
Mar 12 14:35:41.285702 master-0 kubenswrapper[37036]: I0312 14:35:41.285580 37036 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="7ad0044b2389b9999007ceef7cd4808d51c84380e6314ac6db787dc5a548f095" exitCode=0
Mar 12 14:35:41.292059 master-0 kubenswrapper[37036]: I0312 14:35:41.290945 37036 generic.go:334] "Generic (PLEG): container finished" podID="b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7" containerID="36ab6a383938c1c2c65deef282e5bd58d913849b1497608417a2412a1cf8ab99" exitCode=0
Mar 12 14:35:41.293221 master-0 kubenswrapper[37036]: I0312 14:35:41.292779 37036 generic.go:334] "Generic (PLEG): container finished" podID="8e4d9407-ff79-4396-a37f-896617e024d4" containerID="f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff" exitCode=0
Mar 12 14:35:41.299978 master-0 kubenswrapper[37036]: I0312 14:35:41.299917 37036 generic.go:334] "Generic (PLEG): container finished" podID="761993bb-2cba-4e1a-b304-36a24817af94" containerID="e511180297e76f6a11f5330905f38a15021808c15b34dd938afb52d0fc965c91" exitCode=0
Mar 12 14:35:41.308955 master-0 kubenswrapper[37036]: I0312 14:35:41.308568 37036 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" exitCode=0
Mar 12 14:35:41.315156 master-0 kubenswrapper[37036]: I0312 14:35:41.315095 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-44b6s_40912d56-8288-4d58-ad91-7455bd460887/machine-approver-controller/0.log"
Mar 12 14:35:41.315716 master-0 kubenswrapper[37036]: I0312 14:35:41.315667 37036 generic.go:334] "Generic (PLEG): container finished" podID="40912d56-8288-4d58-ad91-7455bd460887" containerID="6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184" exitCode=255
Mar 12 14:35:41.317955 master-0 kubenswrapper[37036]: I0312 14:35:41.317631 37036 generic.go:334] "Generic (PLEG): container finished" podID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerID="44912c45860c53bd920d6344d008ca95bda45324f0583a0a019e5ef0a05b1d24" exitCode=0
Mar 12 14:35:41.320669 master-0 kubenswrapper[37036]: I0312 14:35:41.320620 37036 generic.go:334] "Generic (PLEG): container finished" podID="1bc0d552-01c7-4212-a551-d16419f2dc80" containerID="d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049" exitCode=0
Mar 12 14:35:41.323938 master-0 kubenswrapper[37036]: I0312 14:35:41.323465 37036 generic.go:334] "Generic (PLEG): container finished" podID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerID="92d7499402985a174fd8cf44fdbd49d9d08d220559433aa9bf620331ab2599ae" exitCode=0
Mar 12 14:35:41.326115 master-0 kubenswrapper[37036]: I0312 14:35:41.326079 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 12 14:35:41.326370 master-0 kubenswrapper[37036]: I0312 14:35:41.326336 37036 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac" exitCode=1
Mar 12 14:35:41.326370 master-0 kubenswrapper[37036]: I0312 14:35:41.326364 37036 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8" exitCode=0
Mar 12 14:35:41.328481 master-0 kubenswrapper[37036]: I0312 14:35:41.328427 37036 generic.go:334] "Generic (PLEG): container finished" podID="6defef79-6058-466a-ae0b-8eb9258126be" containerID="e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba" exitCode=0
Mar 12 14:35:41.341305 master-0 kubenswrapper[37036]: E0312 14:35:41.333213 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:41.341305 master-0 kubenswrapper[37036]: I0312 14:35:41.337643 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-qtql5_1bba274a-38c7-4d13-88a5-6bc39228416c/kube-controller-manager-operator/3.log"
Mar 12 14:35:41.341305 master-0 kubenswrapper[37036]: I0312 14:35:41.337712 37036 generic.go:334] "Generic (PLEG): container finished" podID="1bba274a-38c7-4d13-88a5-6bc39228416c" containerID="a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666" exitCode=255
Mar 12 14:35:41.343813 master-0 kubenswrapper[37036]: I0312 14:35:41.341234 37036 generic.go:334] "Generic (PLEG): container finished" podID="61d829d7-38e1-4826-942c-f7317c4a4bec" containerID="952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6" exitCode=0
Mar 12 14:35:41.352208 master-0 kubenswrapper[37036]: I0312 14:35:41.350927 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-dvv78_85459175-2c9c-425d-bdfb-0a79c92ed110/package-server-manager/0.log"
Mar 12 14:35:41.352208 master-0 kubenswrapper[37036]: I0312 14:35:41.351246 37036 generic.go:334] "Generic (PLEG): container finished" podID="85459175-2c9c-425d-bdfb-0a79c92ed110" containerID="e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab" exitCode=1
Mar 12 14:35:41.367452 master-0 kubenswrapper[37036]: I0312 14:35:41.367408 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gt2tw_3f72fbbe-69f0-4622-be05-b839ff9b4d45/openshift-apiserver-operator/2.log"
Mar 12 14:35:41.367659 master-0 kubenswrapper[37036]: I0312 14:35:41.367483 37036 generic.go:334] "Generic (PLEG): container finished" podID="3f72fbbe-69f0-4622-be05-b839ff9b4d45" containerID="46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c" exitCode=255
Mar 12 14:35:41.408558 master-0 kubenswrapper[37036]: I0312 14:35:41.406835 37036 generic.go:334] "Generic (PLEG): container finished" podID="7fed292c3d5a90a99bfee43e89190405" containerID="bd7899bffaf6aa78dc3ed5f5798ea564a1a0894027ca075b490729e999a8ce5b" exitCode=0
Mar 12 14:35:41.420766 master-0 kubenswrapper[37036]: I0312 14:35:41.420707 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/3.log"
Mar 12 14:35:41.421047 master-0 kubenswrapper[37036]: I0312 14:35:41.420769 37036 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c" exitCode=255
Mar 12 14:35:41.428121 master-0 kubenswrapper[37036]: I0312 14:35:41.428093 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/4.log"
Mar 12 14:35:41.428775 master-0 kubenswrapper[37036]: I0312 14:35:41.428697 37036 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392" exitCode=255
Mar 12 14:35:41.429334 master-0 kubenswrapper[37036]: I0312 14:35:41.429286 37036 generic.go:334] "Generic (PLEG): container finished" podID="0a898118-6d01-4211-92f0-43967b75405c" containerID="6060fd0146ead8129b93c5b31730ef60e2eaf7a165dbe7fde9719cb084457eda" exitCode=0
Mar 12 14:35:41.440164 master-0 kubenswrapper[37036]: I0312 14:35:41.440116 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-fv6pp_76d596c0-6a41-43e1-9516-aee9ad834ec2/service-ca-operator/2.log"
Mar 12 14:35:41.440377 master-0 kubenswrapper[37036]: I0312 14:35:41.440166 37036 generic.go:334] "Generic (PLEG): container finished" podID="76d596c0-6a41-43e1-9516-aee9ad834ec2" containerID="132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce" exitCode=255
Mar 12 14:35:41.443466 master-0 kubenswrapper[37036]: I0312 14:35:41.443433 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/cluster-autoscaler-operator/0.log"
Mar 12 14:35:41.443751 master-0 kubenswrapper[37036]: I0312 14:35:41.443724 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757edbb-8ce2-4513-9b32-a552df50634c" containerID="1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31" exitCode=255
Mar 12 14:35:41.450629 master-0 kubenswrapper[37036]: I0312 14:35:41.450583 37036 generic.go:334] "Generic (PLEG): container finished" podID="de61e1fe-294c-48a6-8cf3-aeb4637ef2cc" containerID="1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04" exitCode=0
Mar 12 14:35:41.479496 master-0 kubenswrapper[37036]: I0312 14:35:41.479323 37036 generic.go:334] "Generic (PLEG): container finished" podID="cba33300-f7ef-4547-97ff-62e223da79cf" containerID="eb008940bc7dc6c2ae442f778e48aef8337971c8ef1e3c95db6a891e0cad1a81" exitCode=0
Mar 12 14:35:41.479496 master-0 kubenswrapper[37036]: I0312 14:35:41.479399 37036 generic.go:334] "Generic (PLEG): container finished" podID="cba33300-f7ef-4547-97ff-62e223da79cf" containerID="1bc9540ba67897e35b5ccbe24ebd39e07a2c8806ea8a765dbac1ad9e9c299016" exitCode=0
Mar 12 14:35:41.489937 master-0 kubenswrapper[37036]: I0312 14:35:41.489861 37036 generic.go:334] "Generic (PLEG): container finished" podID="39bda5b8-c748-4023-8680-8e8454512e5b" containerID="433f8c8699626602589391cd2daaab97922be2a22d3d7962e8991c85c86df5c6" exitCode=0
Mar 12 14:35:41.496392 master-0 kubenswrapper[37036]: I0312 14:35:41.496333 37036 generic.go:334] "Generic (PLEG): container finished" podID="6b77ad35-2fff-47bb-ad34-abb3868b09a9" containerID="b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36" exitCode=0
Mar 12 14:35:41.523723 master-0 kubenswrapper[37036]: I0312 14:35:41.520265 37036 generic.go:334] "Generic (PLEG): container finished" podID="a35674af-162c-4a4a-8605-158b2326267e" containerID="74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce" exitCode=0
Mar 12 14:35:41.537923 master-0 kubenswrapper[37036]: E0312 14:35:41.533994 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:41.543506 master-0 kubenswrapper[37036]: I0312 14:35:41.541338 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-7lx8p_61de099a-410b-4d30-83e8-19cf5901cb27/service-ca-controller/2.log"
Mar 12 14:35:41.543506 master-0 kubenswrapper[37036]: I0312 14:35:41.541381 37036 generic.go:334] "Generic (PLEG): container finished" podID="61de099a-410b-4d30-83e8-19cf5901cb27" containerID="a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5" exitCode=255
Mar 12 14:35:41.563919 master-0 kubenswrapper[37036]: I0312 14:35:41.563433 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-754hn_1f9b15c6-b4ee-4907-8daa-376e3b438896/manager/0.log"
Mar 12 14:35:41.563919 master-0 kubenswrapper[37036]: I0312 14:35:41.563483 37036 generic.go:334] "Generic (PLEG): container finished" podID="1f9b15c6-b4ee-4907-8daa-376e3b438896" containerID="ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da" exitCode=1
Mar 12 14:35:41.570526 master-0 kubenswrapper[37036]: I0312 14:35:41.568622 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77899cf6d-h8sq4_8106d14a-b448-4dd1-bccd-926f85394b5d/cluster-olm-operator/2.log"
Mar 12 14:35:41.570526 master-0 kubenswrapper[37036]: I0312 14:35:41.569080 37036 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26" exitCode=255
Mar 12 14:35:41.570526 master-0 kubenswrapper[37036]: I0312 14:35:41.569098 37036 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="34b14db33a75935753eb07fc5c1da978369413ed001610be1a02068299e72c2a" exitCode=0
Mar 12 14:35:41.570526 master-0 kubenswrapper[37036]: I0312 14:35:41.569107 37036 generic.go:334] "Generic (PLEG): container finished" podID="8106d14a-b448-4dd1-bccd-926f85394b5d" containerID="be2a07c0fd561c76349af0b4e32d3d5bd9b366ededeeef597a13a0ecfa9560a3" exitCode=0
Mar 12 14:35:41.574407 master-0 kubenswrapper[37036]: I0312 14:35:41.572174 37036 generic.go:334] "Generic (PLEG): container finished" podID="2f59d485-9f69-4f36-836e-6338f84b7d69" containerID="fd763b32a6f9e14de1e48ab02ce0e8ed0420b566d892ab96ff30c9ac6deeebf4" exitCode=0
Mar 12 14:35:41.574407 master-0 kubenswrapper[37036]: I0312 14:35:41.572193 37036 generic.go:334] "Generic (PLEG): container finished" podID="2f59d485-9f69-4f36-836e-6338f84b7d69" containerID="6f88048bcaa35db146cb15d79ce615c930b521dad3951a081c1c2ef94a48da36" exitCode=0
Mar 12 14:35:41.574407 master-0 kubenswrapper[37036]: I0312 14:35:41.573291 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/3.log"
Mar 12 14:35:41.574407 master-0 kubenswrapper[37036]: I0312 14:35:41.573310 37036 generic.go:334] "Generic (PLEG): container finished" podID="8d775283-2696-4411-8ddf-d4e6000f0a0c" containerID="dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9" exitCode=255
Mar 12 14:35:41.581632 master-0 kubenswrapper[37036]: I0312 14:35:41.577868 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c0743910-1ba7-490d-bc3e-5126562b04aa/installer/0.log"
Mar 12 14:35:41.581632 master-0 kubenswrapper[37036]: I0312 14:35:41.577940 37036 generic.go:334] "Generic (PLEG): container finished" podID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerID="763faa898e18449dd9a50b708e0137c7362e38addce32c4afec9964d733e4f39" exitCode=1
Mar 12 14:35:41.586822 master-0 kubenswrapper[37036]: I0312 14:35:41.584527 37036 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb" exitCode=0
Mar 12 14:35:41.589218 master-0 kubenswrapper[37036]: I0312 14:35:41.588908 37036 generic.go:334] "Generic (PLEG): container finished" podID="941c0808-bbfd-467e-b733-3a8294163ee5" containerID="b0d7763766a63cc91dd74368313cbb94587dedcd2efd8ded0e17187af3e40d25" exitCode=0
Mar 12 14:35:41.590982 master-0 kubenswrapper[37036]: I0312 14:35:41.590953 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/5.log"
Mar 12 14:35:41.591356 master-0 kubenswrapper[37036]: I0312 14:35:41.591281 37036 generic.go:334] "Generic (PLEG): container finished" podID="4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" exitCode=1
Mar 12 14:35:41.593070 master-0 kubenswrapper[37036]: I0312 14:35:41.593042 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-zghs6_879e9bf1-ce4a-40b7-a72c-fe4c61e96cea/cluster-node-tuning-operator/0.log"
Mar 12 14:35:41.593148 master-0 kubenswrapper[37036]: I0312 14:35:41.593075 37036 generic.go:334] "Generic (PLEG): container finished" podID="879e9bf1-ce4a-40b7-a72c-fe4c61e96cea" containerID="84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b" exitCode=1
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643238 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="f2ba438d34b4b3304e8d60d973e3309595cd9060a2ebe30a5d88db295ad25e25" exitCode=0
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643280 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="d39ce324f3db6164db245417f53b6d8ff38716c386224704af63bf67e207b5f1" exitCode=0
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643288 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="9fbd87c96fccfe4bfad334fd8c3bc1df622b06005839f21efff6ba86833c49f2" exitCode=0
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643298 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b" exitCode=0
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643304 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="badf1c98d1937a2f8e44bf83e8bf87b7da9889235c52744f099d88d3a841de7f" exitCode=0
Mar 12 14:35:41.643941 master-0 kubenswrapper[37036]: I0312 14:35:41.643312 37036 generic.go:334] "Generic (PLEG): container finished" podID="9757756c-cb67-4b6f-99c3-dd63f904897a" containerID="cfa5b038bc7b07de92bf843b3a45833830090fe9d6879ece21a0622781be697c" exitCode=0
Mar 12 14:35:41.649304 master-0 kubenswrapper[37036]: I0312 14:35:41.649256 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rqq4v_e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9/approver/0.log"
Mar 12 14:35:41.649933 master-0 kubenswrapper[37036]: I0312 14:35:41.649875 37036 generic.go:334] "Generic (PLEG): container finished" podID="e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9" containerID="6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99" exitCode=1
Mar 12 14:35:41.660027 master-0 kubenswrapper[37036]: I0312 14:35:41.659927 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/3.log"
Mar 12 14:35:41.661065 master-0 kubenswrapper[37036]: I0312 14:35:41.660515 37036 generic.go:334] "Generic (PLEG): container finished" podID="3edaa533-ecbb-443e-a270-4cb4f923daf6" containerID="3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a" exitCode=1
Mar 12 14:35:41.664442 master-0 kubenswrapper[37036]: I0312 14:35:41.664419 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-2pj4z_39252b5a-d014-4319-ad81-3c1bf2ef585e/manager/0.log"
Mar 12 14:35:41.664837 master-0 kubenswrapper[37036]: I0312 14:35:41.664805 37036 generic.go:334] "Generic (PLEG): container finished" podID="39252b5a-d014-4319-ad81-3c1bf2ef585e" containerID="9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793" exitCode=1
Mar 12 14:35:41.666708 master-0 kubenswrapper[37036]: I0312 14:35:41.666686 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/oauth-apiserver/0.log"
Mar 12 14:35:41.670516 master-0 kubenswrapper[37036]: I0312 14:35:41.670458 37036 generic.go:334] "Generic (PLEG): container finished" podID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerID="82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187" exitCode=1
Mar 12 14:35:41.670516 master-0 kubenswrapper[37036]: I0312 14:35:41.670503 37036 generic.go:334] "Generic (PLEG): container finished" podID="7420564a-dc9d-4a2e-b0fc-0cc01f115e3b" containerID="4de0a85e4d47c7fb4dc863fea7d92d4eeed644f410c3792a0156ceb688c0d760" exitCode=0
Mar 12 14:35:41.679714 master-0 kubenswrapper[37036]: I0312 14:35:41.679672 37036 generic.go:334] "Generic (PLEG): container finished" podID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerID="c501e9b39beb072c6b4373a31e843bee99560319d607f9fde7f18203290ac2ca" exitCode=0
Mar 12 14:35:41.684726 master-0 kubenswrapper[37036]: I0312 14:35:41.684693 37036 generic.go:334] "Generic (PLEG): container finished" podID="99433993-93cf-46cb-bb66-485672cb2554" containerID="942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705" exitCode=0
Mar 12 14:35:41.690766 master-0 kubenswrapper[37036]: I0312 14:35:41.690710 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/4.log"
Mar 12 14:35:41.691020 master-0 kubenswrapper[37036]: I0312 14:35:41.690821 37036 generic.go:334] "Generic (PLEG): container finished" podID="57930a54-89ab-4ec8-a504-74035bb74d63" containerID="8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93" exitCode=255
Mar 12 14:35:41.693656 master-0 kubenswrapper[37036]: I0312 14:35:41.693621 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/1.log"
Mar 12 14:35:41.693737 master-0 kubenswrapper[37036]: I0312 14:35:41.693668 37036 generic.go:334] "Generic (PLEG): container finished" podID="06eb9f4b-167e-435b-8ef6-ae44fc0b85a9" containerID="f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d" exitCode=255
Mar 12 14:35:41.695772 master-0 kubenswrapper[37036]: I0312 14:35:41.695750 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-hkf2t_3dc73c14-852d-4957-b6ac-84366ba0594f/kube-storage-version-migrator-operator/3.log"
Mar 12 14:35:41.695858 master-0 kubenswrapper[37036]: I0312 14:35:41.695785 37036 generic.go:334] "Generic (PLEG): container finished" podID="3dc73c14-852d-4957-b6ac-84366ba0594f" containerID="7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199" exitCode=255
Mar 12 14:35:41.700708 master-0 kubenswrapper[37036]: I0312 14:35:41.700667 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/5.log"
Mar 12 14:35:41.700852 master-0 kubenswrapper[37036]: I0312 14:35:41.700709 37036 generic.go:334] "Generic (PLEG): container finished" podID="d56089bf-177c-492d-8964-73a45574e7ed" containerID="cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d" exitCode=1
Mar 12 14:35:41.709851 master-0 kubenswrapper[37036]: I0312 14:35:41.708399 37036 generic.go:334] "Generic (PLEG): container finished" podID="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" containerID="1b509b364f4790e7d098a08001f85e21186839f1379b4fc1d8a3f87999a8287a" exitCode=0
Mar 12 14:35:41.709851 master-0 kubenswrapper[37036]: I0312 14:35:41.708436 37036 generic.go:334] "Generic (PLEG): container finished" podID="70710a0b-8b5d-40f5-b726-fd5e2836ffbe" containerID="8d3bb5013ca4c818b7c70903d8fce9e610940673188c266c6d78750aa35aac12" exitCode=0
Mar 12 14:35:41.710150 master-0 kubenswrapper[37036]: I0312 14:35:41.710076 37036 generic.go:334] "Generic (PLEG): container finished" podID="a2435b91-86d6-415b-a978-34cc859e74f2" containerID="875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152" exitCode=0
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.716692 37036 generic.go:334] "Generic (PLEG): container finished" podID="e2742559-1f28-4f2c-a873-d6a9348972fb" containerID="a6e68da263c509d4a3107148074b05db9d9991a2f13362fc7aaad75eb4e279c0" exitCode=0
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.716707 37036 generic.go:334] "Generic (PLEG): container finished" podID="e2742559-1f28-4f2c-a873-d6a9348972fb" containerID="935fc506f983008a79b60e43ad782c4f076fe53a90782b9c09742c04419944c2" exitCode=0
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.719294 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-vpn8v_08ea0d9f-0635-4759-803e-572eca2f2d34/kube-scheduler-operator-container/1.log"
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.719333 37036 generic.go:334] "Generic (PLEG): container finished" podID="08ea0d9f-0635-4759-803e-572eca2f2d34" containerID="c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9" exitCode=255
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.723316 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-zwdgk_d00a8cc7-7774-40bd-94a1-9ac2d0f63234/openshift-controller-manager-operator/3.log"
Mar 12 14:35:41.726488 master-0 kubenswrapper[37036]: I0312 14:35:41.723347 37036 generic.go:334] "Generic (PLEG): container finished" podID="d00a8cc7-7774-40bd-94a1-9ac2d0f63234" containerID="cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c" exitCode=255
Mar 12 14:35:41.731039 master-0 kubenswrapper[37036]: I0312 14:35:41.730977 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-smpl5_a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2/kube-apiserver-operator/3.log"
Mar 12 14:35:41.731039 master-0 kubenswrapper[37036]: I0312 14:35:41.731032 37036 generic.go:334] "Generic (PLEG): container finished" podID="a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2" containerID="5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb" exitCode=255
Mar 12 14:35:41.734095 master-0 kubenswrapper[37036]: I0312 14:35:41.734045 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-7s8fj_f3c13c5f-3d1f-4e0a-b77b-732255680086/control-plane-machine-set-operator/0.log"
Mar 12 14:35:41.734207 master-0 kubenswrapper[37036]: I0312 14:35:41.734112 37036 generic.go:334] "Generic (PLEG): container finished" podID="f3c13c5f-3d1f-4e0a-b77b-732255680086" containerID="c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f" exitCode=1
Mar 12 14:35:41.740979 master-0 kubenswrapper[37036]: I0312 14:35:41.740370 37036 generic.go:334] "Generic (PLEG): container finished" podID="5a56d42a-efb4-4956-acab-d12c7ca5276e" containerID="146c62a465e9e1e895adc796ffe1dc3a492864f1300cc5372ec58af6ed5526e2" exitCode=0
Mar 12 14:35:41.744573 master-0 kubenswrapper[37036]: I0312 14:35:41.744081 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447" exitCode=0
Mar 12 14:35:41.749607 master-0 kubenswrapper[37036]: I0312 14:35:41.748097 37036 generic.go:334] "Generic (PLEG): container finished" podID="146495bf-0787-483f-a9fc-0e8925b89150" containerID="6033bc31672a320e7b8ffbe7a63f79564d187ec798713169c640338dfe2b84c4" exitCode=0
Mar 12 14:35:41.751964 master-0 kubenswrapper[37036]: I0312 14:35:41.751879 37036 generic.go:334] "Generic (PLEG): container finished" podID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerID="6332902d5d84cf465484ab14dac64d9b60905fd555e191dc35b3857c84ea5469" exitCode=0
Mar 12 14:35:41.774282 master-0 kubenswrapper[37036]: I0312 14:35:41.774243 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-ldxfn_7433d9bf-4edf-4787-a7a1-e5102c7264c7/network-operator/3.log"
Mar 12 14:35:41.774460 master-0 kubenswrapper[37036]: I0312 14:35:41.774291 37036 generic.go:334] "Generic (PLEG): container finished" podID="7433d9bf-4edf-4787-a7a1-e5102c7264c7" containerID="48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba" exitCode=255
Mar 12 14:35:41.775924 master-0 kubenswrapper[37036]: I0312 14:35:41.775626 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/2.log"
Mar 12 14:35:41.775924 master-0 kubenswrapper[37036]: I0312 14:35:41.775660 37036 generic.go:334] "Generic (PLEG): container finished" podID="8660cef9-0ab3-453e-a4b9-c243daa6ddb0" containerID="d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a" exitCode=255
Mar 12 14:35:41.934460 master-0 kubenswrapper[37036]: E0312 14:35:41.934341 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:42.139911 master-0 kubenswrapper[37036]: I0312 14:35:42.139862 37036 apiserver.go:52] "Watching apiserver"
Mar 12 14:35:42.165797 master-0 kubenswrapper[37036]: I0312 14:35:42.165735 37036 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 14:35:42.734658 master-0 kubenswrapper[37036]: E0312 14:35:42.734575 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:44.335175 master-0 kubenswrapper[37036]: E0312 14:35:44.335112 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:47.536055 master-0 kubenswrapper[37036]: E0312 14:35:47.535992 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:52.536492 master-0 kubenswrapper[37036]: E0312 14:35:52.536419 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:35:57.537035 master-0 kubenswrapper[37036]: E0312 14:35:57.536958 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:36:02.537438 master-0 kubenswrapper[37036]: E0312 14:36:02.537378 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:36:07.538232 master-0 kubenswrapper[37036]: E0312 14:36:07.538164 37036 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 12 14:36:11.274631 master-0 kubenswrapper[37036]: E0312 14:36:11.274586 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/crio.service\": failed to get container info for \"/system.slice/crio.service\": unknown container \"/system.slice/crio.service\"" containerName="/system.slice/crio.service"
Mar 12 14:36:11.274631 master-0 kubenswrapper[37036]: E0312 14:36:11.274620 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice\": failed to get container info for \"/system.slice\": unknown container \"/system.slice\"" containerName="/system.slice"
Mar 12 14:36:11.275998 master-0 kubenswrapper[37036]: E0312 14:36:11.275959 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice"
Mar 12 14:36:11.276077 master-0 kubenswrapper[37036]: E0312 14:36:11.276002 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice\": failed to get container info for \"/system.slice\": unknown container \"/system.slice\"" containerName="/system.slice"
Mar 12 14:36:11.277871 master-0 kubenswrapper[37036]: E0312 14:36:11.277810 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice"
Mar 12 14:36:11.278963 master-0 kubenswrapper[37036]: E0312 14:36:11.278930 37036 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/crio.service\": failed to get container info for \"/system.slice/crio.service\": unknown container \"/system.slice/crio.service\"" containerName="/system.slice/crio.service"
Mar 12 14:36:11.493970 master-0 kubenswrapper[37036]: I0312 14:36:11.493883 37036 manager.go:324] Recovery completed
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594132 37036 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594166 37036 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594202 37036 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594408 37036 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594422 37036 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594464 37036 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594474 37036 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 12 14:36:11.595238 master-0 kubenswrapper[37036]: I0312 14:36:11.594482 37036 policy_none.go:49] "None policy: Start"
Mar 12 14:36:11.598030 master-0 kubenswrapper[37036]: I0312 14:36:11.597973 37036 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 12 14:36:11.598030 master-0 kubenswrapper[37036]: I0312 14:36:11.598009 37036 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 14:36:11.598215 master-0 kubenswrapper[37036]: I0312 14:36:11.598201 37036 state_mem.go:75] "Updated machine memory state"
Mar 12 14:36:11.598215 master-0 kubenswrapper[37036]: I0312 14:36:11.598214 37036 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 12 14:36:11.609424 master-0 kubenswrapper[37036]: I0312 14:36:11.609372 37036 manager.go:334] "Starting Device Plugin manager"
Mar 12 14:36:11.609582 master-0 kubenswrapper[37036]: I0312 14:36:11.609448 37036 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 12 14:36:11.609582 master-0 kubenswrapper[37036]: I0312 14:36:11.609465 37036 server.go:79] "Starting device plugin registration server"
Mar 12 14:36:11.609949 master-0 kubenswrapper[37036]: I0312 14:36:11.609920 37036 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 14:36:11.610021 master-0 kubenswrapper[37036]: I0312 14:36:11.609944 37036 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 14:36:11.610522 master-0 kubenswrapper[37036]: I0312 14:36:11.610495 37036 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 14:36:11.611391 master-0 kubenswrapper[37036]: I0312 14:36:11.610583 37036 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 14:36:11.611391 master-0 kubenswrapper[37036]: I0312 14:36:11.610594 37036 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 14:36:11.710321 master-0 kubenswrapper[37036]: I0312 14:36:11.710258 37036 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 14:36:11.716556 master-0 kubenswrapper[37036]: I0312 14:36:11.716521 37036 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 14:36:11.716717 master-0 kubenswrapper[37036]: I0312 14:36:11.716573 37036 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 14:36:11.716717 master-0 kubenswrapper[37036]: I0312 14:36:11.716613 37036 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 14:36:11.716811 master-0 kubenswrapper[37036]: I0312 14:36:11.716798 37036 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 14:36:11.725423 master-0 kubenswrapper[37036]: I0312 14:36:11.725375 37036 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 12 14:36:11.725627 master-0 kubenswrapper[37036]: I0312 14:36:11.725491 37036 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 14:36:11.957319 master-0 kubenswrapper[37036]: I0312 14:36:11.957214 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log"
Mar 12 14:36:11.959314 master-0 kubenswrapper[37036]: I0312 14:36:11.959272 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0" exitCode=255
Mar 12 14:36:11.961744 master-0 kubenswrapper[37036]: I0312 14:36:11.961706 37036 generic.go:334] "Generic (PLEG): container finished" podID="e7f6ebd3-98c8-457c-a88c-7e81270f01b5" containerID="8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" exitCode=0 Mar 12 14:36:12.539030 master-0 kubenswrapper[37036]: I0312 14:36:12.538946 37036 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 14:36:12.539778 master-0 kubenswrapper[37036]: I0312 14:36:12.539701 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4","openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59","openshift-cluster-node-tuning-operator/tuned-btfvk","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82","openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw","openshift-etcd/etcd-master-0","openshift-etcd/installer-2-master-0","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-multus/multus-admission-controller-7769569c45-s5wj4","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-
2v4z5","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk","openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv","openshift-dns-operator/dns-operator-589895fbb7-q4wwv","openshift-marketplace/marketplace-operator-64bf9778cb-qzdff","openshift-monitoring/node-exporter-5pkwh","openshift-multus/multus-additional-cni-plugins-h868v","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg","openshift-insights/insights-operator-8f89dfddd-gltz7","openshift-marketplace/redhat-operators-9bljc","openshift-oauth-apiserver/apiserver-794bf69795-vntlz","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v","openshift-ovn-kubernetes/ovnkube-node-h4b4k","openshift-dns/dns-default-fpjck","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-operator/iptables-alerter-vb4v5","openshift-service-ca/service-ca-84bfdbbb7f-7lx8p","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv","openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47","openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx","openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj","openshift-ingress-operator/ingress-operator-677db989d6-44hhf","openshift-ingress/router-default-79f8cd6fdd-gjwhp","openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts","openshift-multus/network-metrics-daemon-n9v7g","openshift-apiserver/apiserver-6b7d9dd778-7klpj","openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd","openshift-kube-apiserver/installer-4-master-0","openshift-network-operator/network-operator-7c649b
f6d4-ldxfn","openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd","openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v","openshift-marketplace/certified-operators-mgqz4","openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296","openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d","openshift-network-diagnostics/network-check-target-8q2fv","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager/installer-2-retry-1-master-0","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc","openshift-monitoring/metrics-server-85b44c7984-pzbfq","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj","openshift-machine-config-operator/machine-config-server-nj7qg","assisted-installer/assisted-installer-controller-lbcvf","openshift-ingress-canary/ingress-canary-dbdr9","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t","openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc","openshift-kube-scheduler/installer-3-retry-1-master-0","openshift-kube-storage-version-migrator/migrator-5
7ccdf9b5-5zswp","openshift-marketplace/community-operators-4gbmc","openshift-network-node-identity/network-node-identity-rqq4v","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78","openshift-multus/multus-zttwz","openshift-dns/node-resolver-nml4k","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf","openshift-machine-config-operator/machine-config-daemon-ngzc8","openshift-marketplace/redhat-marketplace-vmhgb"] Mar 12 14:36:12.540037 master-0 kubenswrapper[37036]: I0312 14:36:12.540003 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-lbcvf" Mar 12 14:36:12.543792 master-0 kubenswrapper[37036]: I0312 14:36:12.543750 37036 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48455a35-581b-463e-bd51-87d671e06402" Mar 12 14:36:12.544805 master-0 kubenswrapper[37036]: I0312 14:36:12.544767 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 14:36:12.546012 master-0 kubenswrapper[37036]: I0312 14:36:12.545981 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.546699 master-0 kubenswrapper[37036]: I0312 14:36:12.546663 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 14:36:12.547091 master-0 kubenswrapper[37036]: I0312 14:36:12.547060 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.547194 master-0 kubenswrapper[37036]: I0312 14:36:12.547166 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 14:36:12.547388 master-0 kubenswrapper[37036]: I0312 14:36:12.547351 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 14:36:12.547479 master-0 kubenswrapper[37036]: I0312 14:36:12.547452 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 14:36:12.548211 master-0 kubenswrapper[37036]: I0312 14:36:12.548172 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 14:36:12.549347 master-0 kubenswrapper[37036]: I0312 14:36:12.549310 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:36:12.549658 master-0 kubenswrapper[37036]: I0312 14:36:12.549632 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:36:12.551384 master-0 kubenswrapper[37036]: I0312 14:36:12.551340 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 14:36:12.551494 master-0 kubenswrapper[37036]: I0312 14:36:12.551403 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:36:12.551730 master-0 kubenswrapper[37036]: I0312 14:36:12.551350 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.552816 master-0 kubenswrapper[37036]: I0312 14:36:12.552354 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:36:12.552816 master-0 kubenswrapper[37036]: I0312 14:36:12.552386 37036 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.552816 master-0 kubenswrapper[37036]: I0312 14:36:12.552506 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:36:12.552816 master-0 kubenswrapper[37036]: I0312 14:36:12.552541 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:36:12.553067 master-0 kubenswrapper[37036]: I0312 14:36:12.553037 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:36:12.553112 master-0 kubenswrapper[37036]: I0312 14:36:12.553096 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 14:36:12.553188 master-0 kubenswrapper[37036]: I0312 14:36:12.553161 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:36:12.553256 master-0 kubenswrapper[37036]: I0312 14:36:12.553243 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:36:12.561188 master-0 kubenswrapper[37036]: I0312 14:36:12.560507 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 14:36:12.561188 master-0 kubenswrapper[37036]: I0312 14:36:12.560670 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:36:12.561188 master-0 kubenswrapper[37036]: I0312 14:36:12.560958 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" 
Mar 12 14:36:12.561188 master-0 kubenswrapper[37036]: I0312 14:36:12.561060 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:36:12.561514 master-0 kubenswrapper[37036]: I0312 14:36:12.561258 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.563001 master-0 kubenswrapper[37036]: I0312 14:36:12.562136 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 12 14:36:12.563001 master-0 kubenswrapper[37036]: I0312 14:36:12.562155 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:36:12.563001 master-0 kubenswrapper[37036]: I0312 14:36:12.562402 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Mar 12 14:36:12.563001 master-0 kubenswrapper[37036]: I0312 14:36:12.562483 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 14:36:12.563558 master-0 kubenswrapper[37036]: I0312 14:36:12.563525 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:36:12.563632 master-0 kubenswrapper[37036]: I0312 14:36:12.563585 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.563632 master-0 kubenswrapper[37036]: I0312 14:36:12.563609 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.563715 master-0 kubenswrapper[37036]: I0312 14:36:12.563662 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.563763 master-0 kubenswrapper[37036]: I0312 14:36:12.563712 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 14:36:12.565024 master-0 kubenswrapper[37036]: I0312 14:36:12.564981 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 14:36:12.565173 master-0 kubenswrapper[37036]: I0312 14:36:12.565070 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 14:36:12.565252 master-0 kubenswrapper[37036]: I0312 14:36:12.565235 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:36:12.565310 master-0 kubenswrapper[37036]: I0312 14:36:12.565282 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 14:36:12.567515 master-0 
kubenswrapper[37036]: I0312 14:36:12.565411 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.567515 master-0 kubenswrapper[37036]: I0312 14:36:12.565236 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:36:12.575454 master-0 kubenswrapper[37036]: I0312 14:36:12.575389 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:36:12.575674 master-0 kubenswrapper[37036]: I0312 14:36:12.575557 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.575674 master-0 kubenswrapper[37036]: I0312 14:36:12.575632 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.577238 master-0 kubenswrapper[37036]: I0312 14:36:12.576392 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 14:36:12.577238 master-0 kubenswrapper[37036]: I0312 14:36:12.576653 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:36:12.577838 master-0 kubenswrapper[37036]: I0312 14:36:12.577815 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 14:36:12.578083 master-0 kubenswrapper[37036]: I0312 14:36:12.578065 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 14:36:12.578293 master-0 kubenswrapper[37036]: I0312 14:36:12.578277 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Mar 12 14:36:12.578540 master-0 kubenswrapper[37036]: I0312 14:36:12.578524 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 14:36:12.578710 master-0 kubenswrapper[37036]: I0312 14:36:12.578682 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 14:36:12.578871 master-0 kubenswrapper[37036]: I0312 14:36:12.578856 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:36:12.584150 master-0 kubenswrapper[37036]: E0312 14:36:12.579069 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:36:12.584495 master-0 kubenswrapper[37036]: I0312 14:36:12.580356 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 14:36:12.584662 master-0 kubenswrapper[37036]: E0312 14:36:12.580936 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.584853 master-0 kubenswrapper[37036]: I0312 14:36:12.584818 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 14:36:12.584853 master-0 kubenswrapper[37036]: E0312 14:36:12.580959 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:36:12.585004 master-0 kubenswrapper[37036]: I0312 14:36:12.580879 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 14:36:12.585004 master-0 kubenswrapper[37036]: I0312 14:36:12.584833 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 14:36:12.585004 master-0 kubenswrapper[37036]: I0312 14:36:12.583759 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 14:36:12.585117 master-0 kubenswrapper[37036]: I0312 14:36:12.581047 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 14:36:12.585117 master-0 kubenswrapper[37036]: I0312 14:36:12.578587 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 14:36:12.585200 master-0 kubenswrapper[37036]: I0312 14:36:12.585160 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:36:12.585200 master-0 kubenswrapper[37036]: I0312 14:36:12.581117 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 14:36:12.585200 master-0 kubenswrapper[37036]: I0312 14:36:12.585191 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 14:36:12.585319 master-0 
kubenswrapper[37036]: I0312 14:36:12.581175 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.585375 master-0 kubenswrapper[37036]: I0312 14:36:12.585352 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:36:12.585431 master-0 kubenswrapper[37036]: I0312 14:36:12.585401 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 14:36:12.585490 master-0 kubenswrapper[37036]: I0312 14:36:12.581247 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 14:36:12.585617 master-0 kubenswrapper[37036]: I0312 14:36:12.581272 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 14:36:12.585714 master-0 kubenswrapper[37036]: I0312 14:36:12.581332 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:36:12.586152 master-0 kubenswrapper[37036]: I0312 14:36:12.581373 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 14:36:12.586152 master-0 kubenswrapper[37036]: I0312 14:36:12.581392 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 14:36:12.586152 master-0 kubenswrapper[37036]: I0312 14:36:12.581425 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:36:12.586269 master-0 kubenswrapper[37036]: I0312 14:36:12.586177 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:12.586342 master-0 kubenswrapper[37036]: I0312 14:36:12.586323 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 14:36:12.586854 master-0 kubenswrapper[37036]: I0312 14:36:12.586690 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerDied","Data":"6aa44e483ff3af56ade2c830f5190301f0a2aff21489693f95cab78436b2ad8d"} Mar 12 14:36:12.586967 master-0 kubenswrapper[37036]: I0312 14:36:12.586882 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"05fc4965-b390-4edc-a407-d431b06d7612","Type":"ContainerDied","Data":"2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929"} Mar 12 14:36:12.586967 master-0 kubenswrapper[37036]: I0312 14:36:12.586913 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2994881b5befdba78efa5f6568b4edfa2a8b9fa1561fed91504e637ca759f929" Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.586972 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.586988 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587000 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" event={"ID":"42dbcb8f-e8c4-413e-977d-40aa6df226aa","Type":"ContainerStarted","Data":"96773e17e9462f90b171d3286268d0d8f5fc4990dec24aadd0ba11958115f19d"} Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587010 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" event={"ID":"42dbcb8f-e8c4-413e-977d-40aa6df226aa","Type":"ContainerStarted","Data":"dc05a7757105e04e114bec1d0c6d1948857cd13293222846a43aed00c9eb7e9e"} Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587021 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerDied","Data":"6a4b354a483f93559470810779464488abbf5caec068837d5cc9967973e986cd"} Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587033 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"9a2b4b06-98cd-4ca3-aebe-d49651c6013f","Type":"ContainerDied","Data":"d550ea8dc31005b416b9c69f57e3f529e1fb9f7cb9468cf14d70b47c6fe1bf41"} Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587042 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d550ea8dc31005b416b9c69f57e3f529e1fb9f7cb9468cf14d70b47c6fe1bf41" Mar 12 14:36:12.587051 master-0 kubenswrapper[37036]: I0312 14:36:12.587050 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"4090851a8a1e04f68cb376f8a537549cd0813cb04a4f0fc1281d6f979e4c7445"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587059 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"3888e133cb6f93fcd878da6d7969a89f350958d23b4b08aa7f61aa0370050771"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587068 37036 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"eef0b37dd526322eaef7c1aca76f63285c998e29a07055dc363715ea766db015"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587078 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" event={"ID":"59f21770-429b-4b63-82fd-50ce0daf698d","Type":"ContainerStarted","Data":"b91ed73a339c21ab18d17bc789c0ba3301a928d38dce2afb46b197b75f34b51e"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.581459 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587108 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zttwz" event={"ID":"95c11263-0d68-4b11-bcfd-bcb0e96a6988","Type":"ContainerStarted","Data":"5b018faa420052ddd30a7440e3b7a6b3748f361b955c0e4528b5de090907c8ec"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.581490 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587158 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zttwz" event={"ID":"95c11263-0d68-4b11-bcfd-bcb0e96a6988","Type":"ContainerStarted","Data":"fb9c2d52a7f820046d4d8f7dbc4ab42d1bcf38f9fbb4f9b3e069dc056c52a7d9"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587179 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.581495 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587191 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"5913774b8f250bfb47692670821ad697d9a92cb0aca0d95d6ebaa53a1397311f"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587203 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" event={"ID":"272b53c4-134c-404d-9a27-c7371415b1f7","Type":"ContainerStarted","Data":"3010a80a92c3a02adf1119b509dd4d02bfec5d34b2c3fbe2b1e05487ab8ddb25"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587214 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" event={"ID":"272b53c4-134c-404d-9a27-c7371415b1f7","Type":"ContainerStarted","Data":"d6cba419a6f6e1067b6ba753b668a42fc154b7b841036f746eeb0f9473a12dda"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587224 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"4554fa36ea62af0faebf9a33b90a529e86ff1bd8c518571b83301ec75299b664"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587233 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" 
event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerDied","Data":"b7ebd6ed103fd32804e88ec8b0eb113b06bd39e732fa9609967014bb6c6c87cc"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.581571 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587243 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-gltz7" event={"ID":"dd29b21c-7a0e-4311-952f-427b00468e66","Type":"ContainerStarted","Data":"ddff8978b61211cf6981c8dcb5ac20ebbd703343ccf0d4864c6b4d8c7b748d88"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587254 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"882c8b126a35149a72e79b677b717d54b482233f211b3eeec7589c0e044c5274"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587263 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerDied","Data":"d0767e3a40f949712be9170d0b8f7cd2c338fed5faee0a7ad41873676dd6e5ae"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587273 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"b5c77a8f26bcb62c099e151f8163e284029fc1893f65e49773f468da1bd7a06d"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587282 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" 
event={"ID":"6f5cd3ff-ced6-47e3-8054-d83053d87680","Type":"ContainerStarted","Data":"1325db6b5fc63da3d3f80a9e903b690f2007b20dd9156b1536d772080219b0fc"} Mar 12 14:36:12.587276 master-0 kubenswrapper[37036]: I0312 14:36:12.587290 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerDied","Data":"a7a831aba8d50e763154f735949d2f89a1f0e98463882117ee4053d40ba3f7ce"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587300 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"efd52682-bf05-44fc-9790-8adfc87ca087","Type":"ContainerDied","Data":"83a78b6bdc6bac34701501df7342c8dd451a72192f273fdc21aa0b983df21030"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587308 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a78b6bdc6bac34701501df7342c8dd451a72192f273fdc21aa0b983df21030" Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587316 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"2773a86cc6a182bf175dc97eef9809e0caf7310c36237fcf488f8202b3a5b3a1"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587326 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"a573d71f938ba5f8098acbd1d172d8565a7766835eb5b928e725d99289f6a092"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587334 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"380f10a329a2eea87fd21dfa83c04f4ce73f4e4ef348556c89b039d62e9dac7d"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587343 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" event={"ID":"1047bb4a-135f-488d-9399-0518cb3a827d","Type":"ContainerStarted","Data":"b1a27def0943392bc851926036706c077e2c62d9404ab94e4d470faf771c9199"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587352 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"356f690299e3b7ad78aab551d955363ff311b1f2444fabe29c77c744cb4403f0"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587361 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5009920acedd27d6a0105b1b145e95689f042f0ff07c9e9a14badc4267ae9ad8"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587398 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9dddf3271a2d9acbd283c8eb5c1a2bf711cfeed332f245d2144a8b6421eca562"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.582803 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587411 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"52be6745e5385673d059fdbe2baaa4388277f83fc99a7fe7a8efe93c4686d66e"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587421 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"072241b0ff18685e3ac5cf437ba20aea2256aa2c2d716ca900fb030653d7963d"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587430 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"a54d7c040e4e83aac6a6fc975cc3d2fd03101d4237db0646f2870734d1932e37"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587439 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"557e5767b6a5906fd35802d8cc7a729030365600bcb6aca559cdc1d58e816deb"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.582872 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587483 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"7ad0044b2389b9999007ceef7cd4808d51c84380e6314ac6db787dc5a548f095"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587499 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d546c5397e398d2fa2328f65fedfe1cce52498d31ad5c371f9043b0bc9f34f16"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 
14:36:12.587509 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"b83e97de107007adbcd23692a3bdbb649ea8264dd63f326fab85915ecb6c5f3a"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587519 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"19be5ccb5230010d84871a29080c878437ffbe4a525b10e61775810b14c25703"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.582883 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587529 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerDied","Data":"36ab6a383938c1c2c65deef282e5bd58d913849b1497608417a2412a1cf8ab99"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587548 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-5pkwh" event={"ID":"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7","Type":"ContainerStarted","Data":"4e4174446867a7a20182ef847c837a9996a0c6baab2ed07f50687234fab093d4"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587560 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"f6e61fb48d9732e09deab678588d21ae5ee12522c122ebf00a93dabd3828c932"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587574 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerDied","Data":"f3cde608396e1250953a5916aba2ef7c179e1de121583d5c59e0f48fda1512ff"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587588 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"5f66e3c8b94fb51a0c6c9ea1e5170e6a0c1589229e247c05b279a57ea1791d02"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587596 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" event={"ID":"8e4d9407-ff79-4396-a37f-896617e024d4","Type":"ContainerStarted","Data":"b0d9b5d35890bf7ee8f33755b50b3d62e47a389cd7d7e50fa4af660965af6cae"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587632 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"5a171a4570a54c3f9188c37293065f2c1387a33c9d0045159c6fe79364d2cedb"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587678 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"c15c1e9e16b30d85f0885585e2fe098199ed4e7cc955b4ed8774d188c849fa6e"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587693 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"8f0a5e87a171e977245e106bcb0d14b6d01585868818b13d263c2d666131b999"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587705 
37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"e4481bd232d92f3a57b8f7787193a01f1bf071df01fa34ce50980d73d202ef3b"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587717 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"27a2d14c0a647584e5f7a6024a2a5900646e402a88a0ad1b289750c901a9138e"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587727 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"79327b58d378e43070a170da3c3a5c7f6760dc0eb1a55c38ce78fc4548e93dd8"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587765 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"6e495ef489b9ca0f05277f0691c4af4c593cd41786f5ce51a937f04016e8aa5d"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587778 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"378c98f107287fc6c6428ceb6468176d0f8fb0ce32f629fb877669840b856fb3"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587791 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerDied","Data":"e511180297e76f6a11f5330905f38a15021808c15b34dd938afb52d0fc965c91"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 
14:36:12.587804 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" event={"ID":"761993bb-2cba-4e1a-b304-36a24817af94","Type":"ContainerStarted","Data":"ba6778d1fdc6908e0a785cdabed807cc4f2dd052e1c7ef6d135e92d89f5e89d1"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587841 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"7d54ce2bc817ff7890a25ecf66d17535153695c3255ca6f5f7a08771a0185ede"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587858 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"fe9d2abf39cf7290e7c15c5dee12b2f88b594ebe36b5f363c1c4813ec36888d6"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587868 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fpjck" event={"ID":"3ec846db-e344-4f9e-95e6-7a0055f52766","Type":"ContainerStarted","Data":"a917672632ddd41ece955a9caf8b6f8e502d8c6d1a179cc7a84283068844b577"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587880 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7"} Mar 12 14:36:12.587883 master-0 kubenswrapper[37036]: I0312 14:36:12.587920 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.587943 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"fa512b9d1c47fba8ce4517c7ff55b3a36d2662e583e6b6952289b14b55413ef1"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.587956 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"1dac5d600ea05249e8d8af0156190efd630d5fd6d9218a7c125e8b47799a9d88"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.587970 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerDied","Data":"6b815065f5b803f6446ee0525693bbd7ee720d608451c165c93b259f6a7e3184"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583092 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588051 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"9f02bf384767db17e8e9570ea753dcefdc9a2ea0cf7d2650e496583afd2ebc7f"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588095 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" event={"ID":"40912d56-8288-4d58-ad91-7455bd460887","Type":"ContainerStarted","Data":"a071b87c5a3a1d570849d8f30a4ef18e47cf5ac7ae26cb6fa07ebd774622be6c"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588111 37036 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerDied","Data":"44912c45860c53bd920d6344d008ca95bda45324f0583a0a019e5ef0a05b1d24"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588127 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"23b56974-d2b1-4205-af5a-70cc2b616d1a","Type":"ContainerDied","Data":"5d684cba0a95ae743814a8952b46742b894c87c51cb377826df98e54818be432"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588138 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d684cba0a95ae743814a8952b46742b894c87c51cb377826df98e54818be432" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588186 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"87e408de3133a4bf2efebc128a746f8d5687684784576d160d686aa712c52c42"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588201 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerDied","Data":"d4f5f31cb9b13fbf54308c119403bf09d2d0acf82b48cd71b5bda3672a1ed049"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588214 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" event={"ID":"1bc0d552-01c7-4212-a551-d16419f2dc80","Type":"ContainerStarted","Data":"d46849ab9a3cac26570e0fb5ca7236cfad3a52459d3d93f56a2bd305b0ad9cd4"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588224 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerDied","Data":"92d7499402985a174fd8cf44fdbd49d9d08d220559433aa9bf620331ab2599ae"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588248 37036 scope.go:117] "RemoveContainer" containerID="23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588264 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"a2c3501c-0ebe-46d0-b2ed-540f96cd137c","Type":"ContainerDied","Data":"5efa81dbe1ce010e90dacfcc2b35c64f33e1c5492934d48f9dc1bdd46d4dd233"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588278 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5efa81dbe1ce010e90dacfcc2b35c64f33e1c5492934d48f9dc1bdd46d4dd233" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588288 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"464680c0443f63fd05a16f58ce52f9d2432c0930cf81a8fc5c4fea579afa01c4"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588300 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"93a2be4c1cc0002fe72e77c70515d0d6599835f46c575d492bb4928167ddaaac"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588314 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"ece77fc75f8a7b32ae075ac5d9a3759a5a3b706e4492b696da7d62701d1c5eb8"} Mar 12 14:36:12.589564 master-0 
kubenswrapper[37036]: I0312 14:36:12.588355 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"360de6d7cd6901ac994724b265fa41deda5af26bfc1f5396acb31cdc3acfea90"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588367 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"93f129166a8bd6d0ee33efc1d3e3d3b386f208cdbb79ef0a8dea04125491275d"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588378 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerDied","Data":"e09e9528f2e667c7ca5a54a2f40134d7a65389dd5410fb6f666432c3167149ba"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588389 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"9f2fe9790563ec38565007414495f6da66cc6ef242600efb951afc8284d7b4ba"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583146 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583154 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588718 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" event={"ID":"6defef79-6058-466a-ae0b-8eb9258126be","Type":"ContainerStarted","Data":"7ad7c4acbfd0070259486f35a18b99f96bb34f57c1bf16a0b81a55c2de084162"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583171 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588747 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"4443df09a8c19650104c98a740a88d33df6130e524690a66362e4946d87ce8bd"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588804 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"243af8de94d7247256fe8d5f1c07f4bdc58bf9e725adb6ad3b482cf84320ddf3"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588822 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"75a51d91deac1b48c8ef86e4ae313b0ebac186bbd6cc97293836179bad976767"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588834 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" event={"ID":"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649","Type":"ContainerStarted","Data":"b00ca20b86c203586e283f8a194f1ae9775853a076e1989c48f1365bb1141a67"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588845 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"62047f99b4ce506e99d53fe6ad293c502f400eb032ad29d0d887e3da41f2256c"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583180 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583187 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588944 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerDied","Data":"a44c4ecc04fa9e6c4e5b12d13bcdb1beeaf87374ca0d2540444a8445b0121666"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583195 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.588992 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" event={"ID":"1bba274a-38c7-4d13-88a5-6bc39228416c","Type":"ContainerStarted","Data":"1cc258e5add24f89b3e9a9a1502a4d4f7e01fa0c35af8f6d6a9076b7b4e48345"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.583204 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589008 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"39f68fed61650f6dec97860dd3151ed994abaeef80f3d14d0170b6aa53c69d9d"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589026 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"ef9652ff46904d8020e6714eabfec803a7fe8bff55ab4610c8c71c7a4b16e47c"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589066 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589066 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerDied","Data":"952a4e5cff72cd7499151126b7d570c4e426b0316c7d3f1d9462b433d44d34b6"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589349 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" event={"ID":"61d829d7-38e1-4826-942c-f7317c4a4bec","Type":"ContainerStarted","Data":"f0298c9e8c7173c3949586fa731c073a558897f0792064c146633191e5244fab"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589413 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"80fe428101670d5bb2198155d9aa028725f1d648d8b1891b02a37c2835bc8023"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589433 37036 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerDied","Data":"e509fdc6496e2a91ab75938ff7600d03685ac240f8fb3c3d670f376d905b17ab"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589446 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"8c7c68a3a3bab58942cd948fa92e68afb328afcaa83ac1189a7b2322e7ba46ad"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589459 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" event={"ID":"85459175-2c9c-425d-bdfb-0a79c92ed110","Type":"ContainerStarted","Data":"57327dd3cf51a7946c6428acbb4cffd5439484941e4f876980813ac47338ecdb"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589500 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"7256289b9a2663c64b4d6e3489d9934c85fe09c7f80090aa8be7b45d9d4e8d84"} Mar 12 14:36:12.589564 master-0 kubenswrapper[37036]: I0312 14:36:12.589598 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerDied","Data":"46c2a4e909bb52a20054b9e9b5b0a7b00da6400e691aeeec0e60efe2c628204c"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589622 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" 
event={"ID":"3f72fbbe-69f0-4622-be05-b839ff9b4d45","Type":"ContainerStarted","Data":"84ea14c79c9435282226e3a70b4b302086d9d4276408c71b8e887b9f85e1f795"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589670 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" event={"ID":"ef824102-83a5-4629-8057-d4f1a57a530d","Type":"ContainerStarted","Data":"c6ae01a88bdc3790dd26c96718f2304e8d180c5079a242449e4507767ce03d7c"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589768 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" event={"ID":"ef824102-83a5-4629-8057-d4f1a57a530d","Type":"ContainerStarted","Data":"3e2810ad638aff3594c8253ba5203ae1a01b05deb352d63eb28794aa543ce257"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589790 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b2e38b2912c9d15a5b2978f55eb051dd05aa588cbc81336019b954026e6207" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589872 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589889 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589475 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.583211 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.583219 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.583233 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.583251 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.583313 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.584252 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.584850 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.585476 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.588358 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.589998 37036 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerStarted","Data":"0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590019 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"c2dbf5a09af1e2fa063a0458bcbee562bc513bd8f67fc9f514462e42c6e7aba0"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590035 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"1602a9eed5353da99938e46cc2f064b4455a5e47eb3af80ff79cdafd544bf392"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590047 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerDied","Data":"6060fd0146ead8129b93c5b31730ef60e2eaf7a165dbe7fde9719cb084457eda"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590059 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" event={"ID":"0a898118-6d01-4211-92f0-43967b75405c","Type":"ContainerStarted","Data":"7bbac52760e3fcba097d54391f795f027fe56fcf9f7e33e8c515250455992a3b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590071 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" 
event={"ID":"5fb06459-09da-4620-91cf-8c3fe8f425db","Type":"ContainerStarted","Data":"ce843dfd3f27a78f901d934f6e3dbf102d0d981cef10c5b1d1777b8952181107"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590084 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btfvk" event={"ID":"5fb06459-09da-4620-91cf-8c3fe8f425db","Type":"ContainerStarted","Data":"b1a14449751313d757471e50a932157f1cfc8f3980d87122c44917f7224e903e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590096 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"f38874aef5393d658264082641b6ae35c3855eb55f95d5be3e85d3b60c18eb6a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590109 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerDied","Data":"132c247fef63805e546221090174559865f0a5c67459f97a478961649f25c4ce"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590186 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" event={"ID":"76d596c0-6a41-43e1-9516-aee9ad834ec2","Type":"ContainerStarted","Data":"b4d899998f745455ee9f9d0e86782192bfb9c3fa197ad167b3e3e1e3896ea9e7"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590229 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"4144b508950e9d55aa988b5826fcc71dda27f29d18bef2532e5c5b4d53868302"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590245 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerDied","Data":"1f6d2570897da6801ddcca5ad1dff41b4e29f16cbcc5ab930745b1a932963f31"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590258 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"3dcc154c9494e2fe36c0a3115ac75b0708464d60dbbe1a7436789b256f05252a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590272 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" event={"ID":"9757edbb-8ce2-4513-9b32-a552df50634c","Type":"ContainerStarted","Data":"16c9911f528d88ff6368917af5d3a0bfb97b0cd22d43dad86b75920f982a3c90"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.588443 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590313 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"a5acc699b1a37e91f5340ec4c115ef975c8b471e9e344c9594483a5c84605341"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590376 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerDied","Data":"1da1f692fe7f463fbb1c0cbb755fdd4e259885377082c810ee0f69c91f679d04"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590403 
37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"e7a243dee19ff7c60c3cfd7b46d1da9cee4b1db91f6862f6afe950a9febf71ef"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590418 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" event={"ID":"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc","Type":"ContainerStarted","Data":"2376cfb1ee60c237c8964f78aeee837ea12e09f11b9b3dfc1320568c3b4a4743"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590432 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nml4k" event={"ID":"3815db41-fe01-43f6-b75c-4ccca9124f51","Type":"ContainerStarted","Data":"c9974749d7b55714b2a366fdd455a4e5648ebc243ccac259517cdb7794faf5cb"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590445 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nml4k" event={"ID":"3815db41-fe01-43f6-b75c-4ccca9124f51","Type":"ContainerStarted","Data":"2d3eaf559f7c7fc8939b6cb1adf4ce35f6ab04af130fc43628777d00ccfd15a4"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590458 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"8fb3af0133d0946b0f849f54e6b053a8d244cc1e5f114a25ba3e224c22bcf96c"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590472 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"22207ff89d6884489259f42baf46427c71156ff68dfb78cafcbd6e3eaaee6798"} Mar 
12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590484 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" event={"ID":"8c6b9f13-4a3a-4920-a84b-f76516501f81","Type":"ContainerStarted","Data":"04b735b224daf50d8a4394bad34d733739b181daca3e401220cb41161ddee701"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590498 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerStarted","Data":"48a67c0385d8a7388255b47c510bfe700f7804124474e7e1a69fe4888870bf2a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590511 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerDied","Data":"eb008940bc7dc6c2ae442f778e48aef8337971c8ef1e3c95db6a891e0cad1a81"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590523 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerDied","Data":"1bc9540ba67897e35b5ccbe24ebd39e07a2c8806ea8a765dbac1ad9e9c299016"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590536 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmhgb" event={"ID":"cba33300-f7ef-4547-97ff-62e223da79cf","Type":"ContainerStarted","Data":"19d81290fc93fac6e353ccf6f4dabde5040333c3260c06c3a57f91c397c38d86"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590551 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" 
event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"1b9240ea02291ba731222eda95b14d923700f86aa2f9700200a3ef468ef2cb89"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590564 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"3acdc56c43692bcfd84f78b7447975cc602b8dce78d52adc35c712d43e43e0fa"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590575 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" event={"ID":"f569ed3b-924d-4829-b192-f508ee70658d","Type":"ContainerStarted","Data":"1086c8d5071e504e73694312636385db33200a4d801de67bcefe278f7df988d9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590586 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"188113c35fe96cf36264ce279ce38efba594ca2f0808990ac18724ea42464967"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590599 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"7e8ded1c40f6f3e26e0bdf53cc47f92c6162eab80d359209d548be3dc3c1a52f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590609 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" event={"ID":"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba","Type":"ContainerStarted","Data":"aca8c7cb3cefb96ea167603c4fdab132577bdaf6be51eb609e79f8b9ea4df1b7"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: 
I0312 14:36:12.590622 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"8964cd1f217fb6cd94d6566a0a2a6f59f63cb7a634af81c532937c3dbd22f0d9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590635 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"bf877df3c9cca5ce74acde914aebb5ead90404a0291628cd7df82a19c157c176"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590647 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerDied","Data":"433f8c8699626602589391cd2daaab97922be2a22d3d7962e8991c85c86df5c6"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590660 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" event={"ID":"39bda5b8-c748-4023-8680-8e8454512e5b","Type":"ContainerStarted","Data":"5679426d37d3354caeeb4580675058670c5c7ef6fa2efa546a861e1c9f923e06"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590673 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"21f8f12539de21393c8dd3d19a14cef264215b3b3fd47ca7bb10332072f42348"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590685 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" 
event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"a34b6a72c3251dd9a1c20dfe3ee9652cef595595471cc3d6289a1e7342e2aae3"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590696 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerDied","Data":"b8d113d4078bf75e05e20466c91ff71f4f6b488f7676b497a0a45f5dab626d36"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590708 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" event={"ID":"6b77ad35-2fff-47bb-ad34-abb3868b09a9","Type":"ContainerStarted","Data":"9ae8ffe0fbe6457550dbcfde92cc569b256c78e408c6b4f88c41a2524eefcfab"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590718 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dbdr9" event={"ID":"ef5679f7-5bf5-409d-b74b-64a9cbb6c701","Type":"ContainerStarted","Data":"2bc2451d30c899d724765075921ba2037d7b62249caf8354e71f78f87b61472d"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590729 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dbdr9" event={"ID":"ef5679f7-5bf5-409d-b74b-64a9cbb6c701","Type":"ContainerStarted","Data":"5a8c18378832b96fedb1cc482f9c56eff1b7bedfc155a7a794d6f4818bd05ce5"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590749 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerStarted","Data":"eab4795a93eb894d5a185a08bb28e127ecd93f412b5b24a97499c132d3ea0156"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590770 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerDied","Data":"74c768e9e11582adc0014bc840fea327d7f38cf0f676db2b9e0edea0c24915ce"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590790 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" event={"ID":"a35674af-162c-4a4a-8605-158b2326267e","Type":"ContainerStarted","Data":"241f858261d65330369ee282a68caee5de8979050ed624a101ccc38bb5423e5f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590803 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"4b830aae57df8ccdd98824435ccd52272794e23367b55e5ca8ce1c42ac9a4c48"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590816 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerDied","Data":"a9360a88d496d9b99968219677b5a40fc143b8872564dfdffdd3aa113acbb8d5"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590846 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" event={"ID":"61de099a-410b-4d30-83e8-19cf5901cb27","Type":"ContainerStarted","Data":"47bb0848ead40d3cf654dbab8841bba9aaf69454627f9510e73ce08c4830d731"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590868 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" 
event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"d8ff2c564a804fe655cf5c13836235d82f004d14fcc6254310c9d20d2a34b9ca"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590888 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"cbf45306386e8635befce668d8225cbafe68cc1140eea40d954cb85ff55d0336"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590918 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerDied","Data":"ed6b1efe75e8b6c558fafcaa8ddbf929d9ca6180cac551e6f152da3936b202da"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590947 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" event={"ID":"1f9b15c6-b4ee-4907-8daa-376e3b438896","Type":"ContainerStarted","Data":"81cd0864a54b3fb544c03e1c4cc3bb2a1e8301732b585b1ac0d2dad7435e59f9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590961 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"b6e949323342e30da019c0c5dd230cd9dc9467fe07077839fa3e6146d2b06774"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590978 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"07fcba2f19661d8828bf52496d599b063fbcaa903c444fc6dc693f6b4ced2d26"} Mar 12 
14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.590992 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"34b14db33a75935753eb07fc5c1da978369413ed001610be1a02068299e72c2a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591005 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerDied","Data":"be2a07c0fd561c76349af0b4e32d3d5bd9b366ededeeef597a13a0ecfa9560a3"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591017 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" event={"ID":"8106d14a-b448-4dd1-bccd-926f85394b5d","Type":"ContainerStarted","Data":"667a33334db41ad265e60ff8664b098419b2a584d575b100118b0dcbbdce439e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591029 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"d705af3964cb121f77d5ca09181cfdf91c9d4d07e3a5599879eb179498167449"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591041 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"050fc0b90a67cc99fced813d2d0dfac828853a651e063ec897d38aebb5d47e8e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591052 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" 
event={"ID":"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba","Type":"ContainerStarted","Data":"6248f60ded635728b07f9ffbb9d72d48359f97cdb83b7f5d2e6153af60f77309"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591063 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"0ae163796f8d852887d4f4bb30a0ee7a5d70bdc703b68435049e704d0b2a64bb"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591076 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerDied","Data":"fd763b32a6f9e14de1e48ab02ce0e8ed0420b566d892ab96ff30c9ac6deeebf4"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591091 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerDied","Data":"6f88048bcaa35db146cb15d79ce615c930b521dad3951a081c1c2ef94a48da36"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591104 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9bljc" event={"ID":"2f59d485-9f69-4f36-836e-6338f84b7d69","Type":"ContainerStarted","Data":"1349683c6b7a48b60ff43680722efbbec3a557f6a028d5afab1d1b9c68ad3a50"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591115 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"b653f2520c921bea50374d24b8a493063feaa8e5c6c64501293ba49359c77e27"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591139 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerDied","Data":"dab12d78b58362271ed50f79c5a69254f295643a7991e2e36b8a3b67ed281ba9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591153 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" event={"ID":"8d775283-2696-4411-8ddf-d4e6000f0a0c","Type":"ContainerStarted","Data":"b820d186bee28edd1c55ac6380a6987416ca51ef3ff64ae7bf3a04304904c238"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591168 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451cb30a0b8b39cb726cc182b92fb7f0c2e916a7e1138a7ad734d273a44b3de6" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591179 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" event={"ID":"07a6a1d6-fecf-4847-b7c1-160d5d7320fb","Type":"ContainerStarted","Data":"c17b1a095c8d2091cd370bbb911b06ac4230f51bbc05825adea160d39c746b2d"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591211 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" event={"ID":"07a6a1d6-fecf-4847-b7c1-160d5d7320fb","Type":"ContainerStarted","Data":"a41bc83813b39c2fa459a0e9284786027dca250eb150090c47a705729e7d08f5"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591225 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerDied","Data":"763faa898e18449dd9a50b708e0137c7362e38addce32c4afec9964d733e4f39"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591242 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c0743910-1ba7-490d-bc3e-5126562b04aa","Type":"ContainerDied","Data":"ad667b1962e9be89dad22c04e8baae0b8b39d88482f4ed8d30c8828a965ec326"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591252 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad667b1962e9be89dad22c04e8baae0b8b39d88482f4ed8d30c8828a965ec326" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591262 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"38a182ab60d59f721fb8126757690cc7012aae3a440b852f434d3a3df1616418"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591278 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"fb3c45839cbe90283d02c62fead173d9e325341ad3690ee7a41efec589b54f05"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591300 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"8d3f8c6c0f2e16a16de21bbdef81829ff48d83da35a97f9706694fdb99e2f9cc"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591318 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"3a9edbd537b2b433573698a4a6787a21fea247fccf7cbaf8147e87a4f36c14fb"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591331 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"c08577925424813ee777936cf83e1b718ae5ce815b0089c7d7f01bbc45cd2891"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591343 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerDied","Data":"b0d7763766a63cc91dd74368313cbb94587dedcd2efd8ded0e17187af3e40d25"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591356 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"941c0808-bbfd-467e-b733-3a8294163ee5","Type":"ContainerDied","Data":"b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591369 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b728e0e598b7cc096f35be929d43eb0ed111353285b0505a0f58ce9dbef5d088" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591380 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerDied","Data":"c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591393 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"2fa51a43e8255ddff099408eecb3af1c9c7359cdc855341d675c4d921272ecf0"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591405 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"031300aa1cb0172a0d2afed31c2d6390d62119757876eb5bc01076e0f90336fb"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591416 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerStarted","Data":"b7f5d85b9d4bda1ad07cf87ff44bb85a1287e1637de9231fcb5c0a28147a7d8e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591429 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerDied","Data":"84cd4dda4ef244649d072d7fb3ef07cda0fc4acab308d3a457899758e508ea9b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591447 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" event={"ID":"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea","Type":"ContainerStarted","Data":"cf474d719fe021709d76198dcf6233015fdb798e1bd5aaff8f16e8ee1cf431e4"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591469 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591486 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591501 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591512 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591524 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"4962f86c890ab9be604d23a0da920ebdb05a4b0dbc30671f52da23640f2df151"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591536 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerStarted","Data":"ebd95ca3fb2815dc4627a44d443095574f5ee1471a5dae51cc1433a123d8f27b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591566 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"f2ba438d34b4b3304e8d60d973e3309595cd9060a2ebe30a5d88db295ad25e25"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591586 37036 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"d39ce324f3db6164db245417f53b6d8ff38716c386224704af63bf67e207b5f1"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591598 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"9fbd87c96fccfe4bfad334fd8c3bc1df622b06005839f21efff6ba86833c49f2"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591608 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"affa558e980cee997cdd8182eda2cfef7d818deacab403a1f48e02cffbc1c48b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591619 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"badf1c98d1937a2f8e44bf83e8bf87b7da9889235c52744f099d88d3a841de7f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591632 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerDied","Data":"cfa5b038bc7b07de92bf843b3a45833830090fe9d6879ece21a0622781be697c"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591643 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h868v" event={"ID":"9757756c-cb67-4b6f-99c3-dd63f904897a","Type":"ContainerStarted","Data":"6f063e04e3f4cea4c5a58314f5a114923174086e042c2c243d9038f9f34bad2b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 
14:36:12.591654 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"5cb54a4bc2f599bf332cb42ee8b1be36eecedf83f6db23db71f7ec0f390ee742"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591670 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerDied","Data":"6426a3a4748b7e9d673d2f1d6267439ec1d4e697687aa5758b4c1a8fe5038d99"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591682 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"4f82da527f459a4e4785bd921abd6a49239f5f19783c788a0d00d2e0b9706a60"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591693 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rqq4v" event={"ID":"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9","Type":"ContainerStarted","Data":"273deb0b6a9c20f6e288a8f04dbffa2d991224ef0582918efc29bdb17656c1b9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591706 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"98c75e1f97fd956ded29ce0a2ec09f912dd4d6fb9c502e3b869d08808fa332fc"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591720 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" 
event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerDied","Data":"3ebfe9284b5aa5ae3cf93734a2a620a3ca175da8fc2dbf0765228bbf0c19305a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591736 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"b2880692b11dddefd5768e7f708988a4a68f0f5399d1041e081e8804f1478aff"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591747 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" event={"ID":"3edaa533-ecbb-443e-a270-4cb4f923daf6","Type":"ContainerStarted","Data":"8bae2bf48688fed38a08346cb01a13f07f5d6ebf571f08738d916c6d12d3bb19"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591759 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8q2fv" event={"ID":"8e733069-752a-4140-83eb-8287f1bce1a7","Type":"ContainerStarted","Data":"ff62c021b6b2728ab194d385e1dcbbac9d1a1db7bb9e0282f3a425ca39b12bc0"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591774 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8q2fv" event={"ID":"8e733069-752a-4140-83eb-8287f1bce1a7","Type":"ContainerStarted","Data":"210d19917e7415e5f1763dbc60d79ff661ed77ac9ff9582758b201449af2e08f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591786 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"b4044e7e2ef92f0cd6613cc7ae6cd69030edcd7a8b1f34e45d134f63f2150425"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591800 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerDied","Data":"9e5d0273aaf9a58de181bc25e8eb0e74c78055d79bccf5dc90c3b2168e550793"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591813 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"90ca548788bdb3dbbc3bce6e0bb77f916ef9ff6e9d18d4c0ee025d2ba9c36e55"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591824 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" event={"ID":"39252b5a-d014-4319-ad81-3c1bf2ef585e","Type":"ContainerStarted","Data":"48b23f5b2fb0b4600ed151be719911ca6e8598a87db7cece2fed00b00050b177"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591837 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerStarted","Data":"465db2374b4fcd6162b8dc553bb6f2ef4a19ba262a22ad29911e0930f35262e4"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.591932 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerDied","Data":"82e98531076d6e3c9a7e475978917c54179baaf121c2bd492fa03aa8611e6187"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592003 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerDied","Data":"4de0a85e4d47c7fb4dc863fea7d92d4eeed644f410c3792a0156ceb688c0d760"} 
Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592021 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" event={"ID":"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b","Type":"ContainerStarted","Data":"39547af9c96ab9ffa0c68d5520b2aefe82b1e2e9c5c31895677204de893a9b6a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592033 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"34219c8db92022e83f302fe60298f8acc5a44b5e8ce995bbe93cbd8b92bb7d3e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592046 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"7b9fd861cdb850c770377b61b9c7cf051a5f9d4b0cf67257f63a4048e2364f02"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592058 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" event={"ID":"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4","Type":"ContainerStarted","Data":"6724dfeb711ea97e4c0311828871b84e605df95c88e47b984ac33b84e0c182f2"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592072 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerDied","Data":"c501e9b39beb072c6b4373a31e843bee99560319d607f9fde7f18203290ac2ca"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592085 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" 
event={"ID":"0c8675d4-a0be-42a3-96af-e56f5fb02983","Type":"ContainerDied","Data":"3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592096 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3378bf89846b15560831731ea870867860116f550ee6cc7c8a063f8901a47bce" Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592107 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nj7qg" event={"ID":"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a","Type":"ContainerStarted","Data":"540809e8c57264298020b8f7c329852fdc11e5e328ec4a2eb78873d2a2fd4933"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592119 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nj7qg" event={"ID":"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a","Type":"ContainerStarted","Data":"133614914dd24d9ac9613df300e1e5f9690b2a5705765951b6217919a73bd40b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592130 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" event={"ID":"f7b68603-8af3-4a50-8d39-86bbcdf1c597","Type":"ContainerStarted","Data":"1564943ad1ff64ec05cc4bdb39b9cac207880b0ddd829f16092763ce6b2053d9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592142 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" event={"ID":"f7b68603-8af3-4a50-8d39-86bbcdf1c597","Type":"ContainerStarted","Data":"6a6f22295caf5561da4b53d5d1d44905e37cde1c7951dfd83965f63ee4f0c534"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592153 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerStarted","Data":"80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592165 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerDied","Data":"942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592177 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerStarted","Data":"2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592188 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"d99d8c1ae20305282e19d20db3c4034a70d569c692d3ca52db2c6c835d89056f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592200 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerDied","Data":"8d633c24c0fbfe0880167743a2ebe5f60f0f211a6026d8c3f55625a7e7adbd93"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592212 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" 
event={"ID":"57930a54-89ab-4ec8-a504-74035bb74d63","Type":"ContainerStarted","Data":"bb2ba7d0c1c51336231f0b223ca71f794a5f473f0c46059600789cebd6ae818f"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592238 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"cf3b7b652c6e8beb6b340ea9b42886885fa378a7a2f0d930f3dcc101d315af74"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592252 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerDied","Data":"f0b49f86d1ebba78f4cfa063af24f0516cffba203587d317eadf4a198fe2c77d"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592266 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" event={"ID":"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9","Type":"ContainerStarted","Data":"7e1bd495d46e0c7a0ac9149686af3fabe8525fa70c85e91b10cc34e43bcb54d8"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592278 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"9ebeb9694fca5f3db47e9fa609996cadf840e959f920863cd859cd6c26d01671"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592294 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" 
event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerDied","Data":"7c75b0b66bdc20c82fe578e42fb9ae10c12f677e86c5f3339f7a2fe4881a6199"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592309 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" event={"ID":"3dc73c14-852d-4957-b6ac-84366ba0594f","Type":"ContainerStarted","Data":"1ba5c83b988cf94fb241db9240f0b33554a204e49670a14cf13953d488a8abe8"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592322 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" event={"ID":"900b2a0e-1e2b-41a3-86f5-639ec1e95969","Type":"ContainerStarted","Data":"6b9d3f1d90ce9219f6b4917e4b3176236cb57e09e88592cc7f4e6e459e15ea90"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592335 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" event={"ID":"900b2a0e-1e2b-41a3-86f5-639ec1e95969","Type":"ContainerStarted","Data":"e7f98f2c20f8a17639a398b1fbfbba35de0dedfd7ce02e92e1a2182183ee86ac"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592348 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"a625865f3b69893afdeab1c428fb5b3ab0a928ff5b48376f2646d22f9267fdfd"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592361 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" 
event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerDied","Data":"cd9014bcffe6ddde739ac15065ac6e2169de2b76f2a0295b122a3bc2a089b78d"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592372 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" event={"ID":"d56089bf-177c-492d-8964-73a45574e7ed","Type":"ContainerStarted","Data":"c0057d7bbbc9bd9f44bd51e3c80dfbe61d922316757a135f4fb3b8485ad4e5e9"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592392 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerStarted","Data":"fb097b697c600a4c9949f08cdf30a60a633ba6d4b0ed4e2e71d781af9c42818b"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592410 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerDied","Data":"1b509b364f4790e7d098a08001f85e21186839f1379b4fc1d8a3f87999a8287a"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592422 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerDied","Data":"8d3bb5013ca4c818b7c70903d8fce9e610940673188c266c6d78750aa35aac12"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592434 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqz4" event={"ID":"70710a0b-8b5d-40f5-b726-fd5e2836ffbe","Type":"ContainerStarted","Data":"f02823618c817a57f5deb9d5aa242eb2274591837e55328914242489612536a0"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592445 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerStarted","Data":"8504d8b3c047fc38b216e74a2854ab9051eda54c09a2ad35024a92e002c39426"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592456 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerDied","Data":"875a6bda6b71188c64ac2ab0648f7976d1deadab74df54ad54a3c4c6e3e8c152"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592468 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" event={"ID":"a2435b91-86d6-415b-a978-34cc859e74f2","Type":"ContainerStarted","Data":"a6ab4911ef54a5ef7fd92d9752905d7377429179c56c4e77bafea0e6505d40e2"} Mar 12 14:36:12.597290 master-0 kubenswrapper[37036]: I0312 14:36:12.592478 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerStarted","Data":"86b3413a245ccb948b2791723b699bee2548d7f2a2bcf15246661ec724ccd645"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592490 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerDied","Data":"a6e68da263c509d4a3107148074b05db9d9991a2f13362fc7aaad75eb4e279c0"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592503 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerDied","Data":"935fc506f983008a79b60e43ad782c4f076fe53a90782b9c09742c04419944c2"} Mar 12 14:36:12.605069 master-0 
kubenswrapper[37036]: I0312 14:36:12.592514 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbmc" event={"ID":"e2742559-1f28-4f2c-a873-d6a9348972fb","Type":"ContainerStarted","Data":"44f838e36ef84ec07445889d3aec1d687c84ce529c36e9146d695bf4ed4afa8f"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592527 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"78e4947f344b5bf77c640296e1ec1a396c45d29d30d4a66e0eef8ce340e94e05"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592542 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerDied","Data":"c7748344653d88d11ff333e5116bce0c85dee6521b85089b95571404112fbab9"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592555 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" event={"ID":"08ea0d9f-0635-4759-803e-572eca2f2d34","Type":"ContainerStarted","Data":"43ed8c1a4973dd17aafd4ecf7a139cc5fe9ab8ae42ddeb20c5c40716650f035f"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592568 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"b6bbd0c5f61f89850e4a55dd74cf02eb9ebef972bb50c7b01561e16b68e8704e"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592581 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerDied","Data":"cdfe0e410845d5baf2e09f8531028d9af2d70fe1e72cb65a07430cd6462f940c"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592593 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" event={"ID":"d00a8cc7-7774-40bd-94a1-9ac2d0f63234","Type":"ContainerStarted","Data":"59d708b78a7b260fc1f5fce51861156cd584df9875d86be3a6175021610d5f66"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592650 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"a5a8fe9347723240cf160315b7dc5a4ab938896729de851e21ca853677fbf3ce"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592664 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerDied","Data":"5efaa8718300502113322a1eee9979f20223fd4bf67820218994af2c3ddf3fdb"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592654 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592678 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" event={"ID":"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2","Type":"ContainerStarted","Data":"643a9eb1fc3e8f464aba2201dd6fa47d57c365903e1554bd77d2fd4b8d623917"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592690 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerStarted","Data":"510ccfcc8baef1b4d5cf64c2613ac89aa7307dc24793f9d1e3ffbb21645aa509"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592706 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerDied","Data":"c67f823638be00e0ed74a2579b7dd1b4da80134d340ad18f11466d7e3913888f"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592719 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" event={"ID":"f3c13c5f-3d1f-4e0a-b77b-732255680086","Type":"ContainerStarted","Data":"7f4e5afa4afe018a7c389e007a13d614d179ad2102c4e104bffdef509a1d7c7b"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592731 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"6f73e85300f82c74d7e3de259c7823bc1fa3d345012078ea6cbaa374b7196577"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592743 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"e9a89dbf9c4b5498b299505ee1db6b94e8fd5fbe2a7174de9621cd8bdf42917f"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592757 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" 
event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"0e8fc01e9a8eda98a015f25b77b74c387b2748cffe4174ae0263f83f13e0be0a"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592768 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" event={"ID":"a81be38f-e07e-4863-8d61-fdefc2713a6a","Type":"ContainerStarted","Data":"b067750f065ba84cd14fac759b144c851d17dfcf9ba98a9096e90f8e2906332d"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592779 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerDied","Data":"146c62a465e9e1e895adc796ffe1dc3a492864f1300cc5372ec58af6ed5526e2"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592795 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"5a56d42a-efb4-4956-acab-d12c7ca5276e","Type":"ContainerDied","Data":"de0406e113f23db73705a57d2ac92f7e04c405beeb25e91cf51ec912fcd90a38"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592808 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0406e113f23db73705a57d2ac92f7e04c405beeb25e91cf51ec912fcd90a38" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592818 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592830 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592850 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592861 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592872 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592883 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592947 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"75d2cc73f5d8290489c2ec72fc148a6f125ffa59eaf8f20c0252b0060ef642a3"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592965 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="assisted-installer/assisted-installer-controller-lbcvf" event={"ID":"146495bf-0787-483f-a9fc-0e8925b89150","Type":"ContainerDied","Data":"6033bc31672a320e7b8ffbe7a63f79564d187ec798713169c640338dfe2b84c4"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592978 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-lbcvf" event={"ID":"146495bf-0787-483f-a9fc-0e8925b89150","Type":"ContainerDied","Data":"8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.592989 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8691ff1161482cb0ea7536261d7d49ae2b9d112fc1e670e086005a7ae489ba6c" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593000 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerDied","Data":"6332902d5d84cf465484ab14dac64d9b60905fd555e191dc35b3857c84ea5469"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593013 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b2d8e6e9-c10f-4b43-8155-9addbfddba2e","Type":"ContainerDied","Data":"f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593025 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b8e2c91dfdac4af077c810b8c82108167dd8fa5fde5c09fa329a80aae9a543" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593037 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" 
event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerStarted","Data":"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593051 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerStarted","Data":"b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593065 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"b8084c79072268deed68a248c9cc23b07e893e8cdbd559a3d91ee67109a24a9f"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593077 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"aa27a3d716446258953a4956aee28f02e22ffb14db399fb7312647fbcc4f9bfc"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593091 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n9v7g" event={"ID":"7fdce71e-8085-4316-be40-e535530c2ca4","Type":"ContainerStarted","Data":"bc3c55d0c455838629b8ab5cf95b13e36cb5ff08d49b778a2bbce43b9948d568"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593103 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"4e84d09329d158806666f09503ce18f2a051ebedca7fa710b43371c50013f13b"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593116 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerDied","Data":"48fe02f7a254d8d98f49ab36edbe52b1845dafa9c51071f3a38df472248895ba"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593129 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" event={"ID":"7433d9bf-4edf-4787-a7a1-e5102c7264c7","Type":"ContainerStarted","Data":"422b72f1d9f4ed3748b07f1e5c14fad3faa59d5f9a198007cce69e02be1d9fa2"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593141 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"36113a200e00efea87bc465d209049d07954fd38fc45547a2de2a279634e07cb"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593154 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerDied","Data":"d135f68615930d49632ead44689c31ed1dba2d0c236cbda4ae0463dc788e0e6a"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593166 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" event={"ID":"8660cef9-0ab3-453e-a4b9-c243daa6ddb0","Type":"ContainerStarted","Data":"2ed4af146d2bc6a8dae65fe67eb8f5e0b4dce64f0e0b6991bdd46a09447f48de"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593177 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.593188 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerDied","Data":"8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7"} Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.600115 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.600559 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.600968 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.601569 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.602784 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:36:12.605069 master-0 kubenswrapper[37036]: I0312 14:36:12.604432 37036 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 12 14:36:12.611717 master-0 kubenswrapper[37036]: I0312 14:36:12.611684 37036 scope.go:117] "RemoveContainer" containerID="23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" Mar 12 14:36:12.612277 master-0 kubenswrapper[37036]: E0312 14:36:12.612216 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee\": container with ID starting with 23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee not found: ID does not exist" containerID="23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee" Mar 12 14:36:12.612352 master-0 kubenswrapper[37036]: I0312 14:36:12.612247 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee"} err="failed to get container status \"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee\": rpc error: code = NotFound desc = could not find container \"23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee\": container with ID starting with 23045659386f5f50b8b2e11a25ff55cb6da08b535a3f1f8469ef54d77c636cee not found: ID does not exist" Mar 12 14:36:12.622115 master-0 kubenswrapper[37036]: I0312 14:36:12.622054 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 14:36:12.626985 master-0 kubenswrapper[37036]: I0312 14:36:12.626949 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.627104 master-0 kubenswrapper[37036]: I0312 14:36:12.626997 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:12.627104 master-0 
kubenswrapper[37036]: I0312 14:36:12.627029 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.627104 master-0 kubenswrapper[37036]: I0312 14:36:12.627055 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:12.627104 master-0 kubenswrapper[37036]: I0312 14:36:12.627079 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:12.627104 master-0 kubenswrapper[37036]: I0312 14:36:12.627100 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.627322 master-0 kubenswrapper[37036]: I0312 14:36:12.627125 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:12.627373 master-0 kubenswrapper[37036]: I0312 14:36:12.627347 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7krt\" (UniqueName: \"kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.627418 master-0 kubenswrapper[37036]: I0312 14:36:12.627381 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.627465 master-0 kubenswrapper[37036]: I0312 14:36:12.627421 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:36:12.627465 master-0 kubenswrapper[37036]: I0312 14:36:12.627443 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:36:12.627617 master-0 kubenswrapper[37036]: I0312 14:36:12.627467 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:12.627617 master-0 kubenswrapper[37036]: I0312 14:36:12.627490 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.627617 master-0 kubenswrapper[37036]: I0312 14:36:12.627467 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-whereabouts-configmap\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.627765 master-0 kubenswrapper[37036]: I0312 14:36:12.627729 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-client\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.627765 master-0 kubenswrapper[37036]: I0312 14:36:12.627735 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 
14:36:12.627864 master-0 kubenswrapper[37036]: I0312 14:36:12.627768 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.627864 master-0 kubenswrapper[37036]: I0312 14:36:12.627793 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:12.627864 master-0 kubenswrapper[37036]: I0312 14:36:12.627821 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:36:12.627864 master-0 kubenswrapper[37036]: I0312 14:36:12.627827 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fdce71e-8085-4316-be40-e535530c2ca4-metrics-certs\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:36:12.627864 master-0 kubenswrapper[37036]: I0312 14:36:12.627846 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcz9\" (UniqueName: \"kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9\") pod 
\"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627869 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627891 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shknb\" (UniqueName: \"kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627933 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627955 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627976 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.627997 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.628001 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c6b9f13-4a3a-4920-a84b-f76516501f81-metrics-tls\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.628021 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.628046 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628181 master-0 
kubenswrapper[37036]: I0312 14:36:12.628071 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.628086 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f72fbbe-69f0-4622-be05-b839ff9b4d45-config\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:36:12.628181 master-0 kubenswrapper[37036]: I0312 14:36:12.628104 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628282 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-utilities\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628314 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: 
\"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628361 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628381 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628401 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628416 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628438 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod 
\"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628457 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628477 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628483 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.628501 master-0 kubenswrapper[37036]: I0312 14:36:12.628496 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628513 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628530 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628547 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628565 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628610 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628651 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628681 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628698 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628712 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628729 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628743 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628745 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/761993bb-2cba-4e1a-b304-36a24817af94-ovn-node-metrics-cert\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628773 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628803 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 
14:36:12.628834 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628860 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.628877 master-0 kubenswrapper[37036]: I0312 14:36:12.628886 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjsjh\" (UniqueName: \"kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.628936 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.628966 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.628989 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629011 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629009 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/42dbcb8f-e8c4-413e-977d-40aa6df226aa-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629040 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629064 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629090 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629113 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629139 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629164 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: 
\"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629185 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629210 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629233 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629254 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629274 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ea0d9f-0635-4759-803e-572eca2f2d34-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629277 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629318 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.629504 master-0 kubenswrapper[37036]: I0312 14:36:12.629341 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629535 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/8106d14a-b448-4dd1-bccd-926f85394b5d-operand-assets\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629609 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629687 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629710 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629729 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629748 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629767 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629786 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629803 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629820 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62ptf\" (UniqueName: \"kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf\") pod 
\"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629835 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629749 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0a898118-6d01-4211-92f0-43967b75405c-available-featuregates\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.629973 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.630000 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.630008 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:12.630142 master-0 kubenswrapper[37036]: I0312 14:36:12.630084 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-catalog-content\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630193 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630232 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630259 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " 
pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630353 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630418 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630449 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630502 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630531 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630556 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630615 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-webhook-cert\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630672 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630700 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" 
Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630719 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630740 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630780 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630825 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2435b91-86d6-415b-a978-34cc859e74f2-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630801 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod 
\"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630875 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630956 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrqg\" (UniqueName: \"kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.630992 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tm9\" (UniqueName: \"kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631084 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" 
Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631120 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631143 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-trusted-ca\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631146 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631176 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631144 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/272b53c4-134c-404d-9a27-c7371415b1f7-srv-cert\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631201 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631223 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:12.631291 master-0 kubenswrapper[37036]: I0312 14:36:12.631263 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.631353 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-utilities\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.631403 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.631432 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.631470 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.632164 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-config\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.632199 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.632240 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.632272 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:12.632310 master-0 kubenswrapper[37036]: I0312 14:36:12.632302 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632327 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632361 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632390 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632418 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632441 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632469 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632497 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632523 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632550 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcq7v\" (UniqueName: \"kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632575 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.632652 master-0 kubenswrapper[37036]: I0312 14:36:12.632605 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.632766 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.632797 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgft\" (UniqueName: \"kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.632858 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.632924 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.632960 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: 
\"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.633104 master-0 kubenswrapper[37036]: I0312 14:36:12.633019 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.633593 master-0 kubenswrapper[37036]: I0312 14:36:12.633514 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:12.634036 master-0 kubenswrapper[37036]: I0312 14:36:12.634007 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.634230 master-0 kubenswrapper[37036]: I0312 14:36:12.634204 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9757756c-cb67-4b6f-99c3-dd63f904897a-cni-binary-copy\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.634230 master-0 kubenswrapper[37036]: I0312 14:36:12.634209 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3dc73c14-852d-4957-b6ac-84366ba0594f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:36:12.634439 master-0 kubenswrapper[37036]: I0312 14:36:12.634418 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-env-overrides\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.634508 master-0 kubenswrapper[37036]: I0312 14:36:12.634439 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-ovnkube-identity-cm\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.634606 master-0 kubenswrapper[37036]: I0312 14:36:12.634585 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:12.634664 master-0 kubenswrapper[37036]: I0312 14:36:12.634648 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.635176 master-0 kubenswrapper[37036]: I0312 
14:36:12.635147 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/85459175-2c9c-425d-bdfb-0a79c92ed110-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:36:12.635233 master-0 kubenswrapper[37036]: I0312 14:36:12.635223 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.635399 master-0 kubenswrapper[37036]: I0312 14:36:12.635265 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:12.635399 master-0 kubenswrapper[37036]: I0312 14:36:12.635297 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:12.635399 master-0 kubenswrapper[37036]: I0312 14:36:12.635328 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config\") pod \"apiserver-794bf69795-vntlz\" (UID: 
\"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635434 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635504 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635567 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635597 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635623 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-metrics-tls\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635663 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.635829 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.636110 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276qm\" (UniqueName: \"kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:36:12.636275 master-0 kubenswrapper[37036]: I0312 14:36:12.636255 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a81be38f-e07e-4863-8d61-fdefc2713a6a-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.636729 master-0 
kubenswrapper[37036]: I0312 14:36:12.636360 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:12.636729 master-0 kubenswrapper[37036]: I0312 14:36:12.636430 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:12.636729 master-0 kubenswrapper[37036]: I0312 14:36:12.636503 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:12.636729 master-0 kubenswrapper[37036]: I0312 14:36:12.636570 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qv7x\" (UniqueName: \"kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:12.636729 master-0 kubenswrapper[37036]: I0312 14:36:12.636603 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:36:12.636952 master-0 kubenswrapper[37036]: I0312 14:36:12.636734 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9czc5\" (UniqueName: \"kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:36:12.636952 master-0 kubenswrapper[37036]: I0312 14:36:12.636741 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:12.637084 master-0 kubenswrapper[37036]: I0312 14:36:12.637055 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:12.637133 master-0 kubenswrapper[37036]: I0312 14:36:12.637114 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:12.637214 master-0 kubenswrapper[37036]: I0312 14:36:12.637152 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56twk\" (UniqueName: \"kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:12.637257 master-0 kubenswrapper[37036]: I0312 14:36:12.637215 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d596c0-6a41-43e1-9516-aee9ad834ec2-serving-cert\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:36:12.637257 master-0 kubenswrapper[37036]: I0312 14:36:12.637238 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:12.637338 master-0 kubenswrapper[37036]: I0312 14:36:12.637322 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:12.637383 master-0 kubenswrapper[37036]: I0312 14:36:12.637363 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:12.637426 master-0 kubenswrapper[37036]: I0312 14:36:12.637403 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.637467 master-0 kubenswrapper[37036]: I0312 14:36:12.637443 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:12.637514 master-0 kubenswrapper[37036]: I0312 14:36:12.637479 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-b5qg4\" (UID: \"900b2a0e-1e2b-41a3-86f5-639ec1e95969\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:36:12.637557 master-0 kubenswrapper[37036]: I0312 14:36:12.637513 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:12.637557 master-0 kubenswrapper[37036]: I0312 
14:36:12.637545 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gnl\" (UniqueName: \"kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl\") pod \"csi-snapshot-controller-7577d6f48-z9hzg\" (UID: \"d56089bf-177c-492d-8964-73a45574e7ed\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:36:12.637644 master-0 kubenswrapper[37036]: I0312 14:36:12.637581 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:36:12.637644 master-0 kubenswrapper[37036]: I0312 14:36:12.637619 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms688\" (UniqueName: \"kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:12.637730 master-0 kubenswrapper[37036]: I0312 14:36:12.637657 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj9j\" (UniqueName: \"kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:12.637730 master-0 kubenswrapper[37036]: I0312 14:36:12.637689 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:12.637730 master-0 kubenswrapper[37036]: I0312 14:36:12.637720 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.637856 master-0 kubenswrapper[37036]: I0312 14:36:12.637754 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.637856 master-0 kubenswrapper[37036]: I0312 14:36:12.637793 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:12.637856 master-0 kubenswrapper[37036]: I0312 14:36:12.637830 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:12.637999 
master-0 kubenswrapper[37036]: I0312 14:36:12.637864 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwl8\" (UniqueName: \"kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:12.637999 master-0 kubenswrapper[37036]: I0312 14:36:12.637920 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbjg\" (UniqueName: \"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:36:12.637999 master-0 kubenswrapper[37036]: I0312 14:36:12.637962 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:12.638128 master-0 kubenswrapper[37036]: I0312 14:36:12.638084 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-srv-cert\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:12.638128 master-0 kubenswrapper[37036]: I0312 14:36:12.638086 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.638209 master-0 kubenswrapper[37036]: I0312 14:36:12.638151 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7nnk\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.638209 master-0 kubenswrapper[37036]: I0312 14:36:12.638189 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:12.638287 master-0 kubenswrapper[37036]: I0312 14:36:12.638231 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwtd\" (UniqueName: \"kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:12.638362 master-0 kubenswrapper[37036]: I0312 14:36:12.638266 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.638409 master-0 kubenswrapper[37036]: I0312 14:36:12.638397 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:12.638450 master-0 kubenswrapper[37036]: I0312 14:36:12.638428 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:36:12.638489 master-0 kubenswrapper[37036]: I0312 14:36:12.638449 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-utilities\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:12.638489 master-0 kubenswrapper[37036]: I0312 14:36:12.638462 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.638572 master-0 kubenswrapper[37036]: I0312 14:36:12.638329 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-config\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:36:12.638572 master-0 kubenswrapper[37036]: I0312 14:36:12.638553 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.638648 master-0 kubenswrapper[37036]: I0312 14:36:12.638595 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.638648 master-0 kubenswrapper[37036]: I0312 14:36:12.638636 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.638726 master-0 kubenswrapper[37036]: I0312 14:36:12.638666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" 
(UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.638726 master-0 kubenswrapper[37036]: I0312 14:36:12.638700 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.638800 master-0 kubenswrapper[37036]: I0312 14:36:12.638740 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:12.638800 master-0 kubenswrapper[37036]: I0312 14:36:12.638598 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba33300-f7ef-4547-97ff-62e223da79cf-utilities\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:12.638887 master-0 kubenswrapper[37036]: I0312 14:36:12.638863 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/8106d14a-b448-4dd1-bccd-926f85394b5d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:12.638988 master-0 kubenswrapper[37036]: I0312 14:36:12.638970 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-tuned\") pod 
\"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.639036 master-0 kubenswrapper[37036]: I0312 14:36:12.638992 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cni-binary-copy\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.639104 master-0 kubenswrapper[37036]: I0312 14:36:12.639075 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-daemon-config\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.639211 master-0 kubenswrapper[37036]: I0312 14:36:12.639169 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bba274a-38c7-4d13-88a5-6bc39228416c-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:12.639309 master-0 kubenswrapper[37036]: I0312 14:36:12.639288 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-catalog-content\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:12.639357 master-0 kubenswrapper[37036]: I0312 14:36:12.639307 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.639357 master-0 kubenswrapper[37036]: I0312 14:36:12.639347 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2435b91-86d6-415b-a978-34cc859e74f2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:12.639431 master-0 kubenswrapper[37036]: I0312 14:36:12.639340 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.639431 master-0 kubenswrapper[37036]: I0312 14:36:12.639404 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:12.639537 master-0 kubenswrapper[37036]: I0312 14:36:12.639512 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:12.639622 master-0 kubenswrapper[37036]: I0312 14:36:12.639592 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.639704 master-0 kubenswrapper[37036]: I0312 14:36:12.639684 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:12.639996 master-0 kubenswrapper[37036]: I0312 14:36:12.639872 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcz8p\" (UniqueName: \"kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:12.639996 master-0 kubenswrapper[37036]: I0312 14:36:12.639950 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67sxk\" (UniqueName: \"kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.639996 master-0 kubenswrapper[37036]: I0312 14:36:12.639974 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/08ea0d9f-0635-4759-803e-572eca2f2d34-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:12.640142 master-0 kubenswrapper[37036]: I0312 14:36:12.640000 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.640142 master-0 kubenswrapper[37036]: I0312 14:36:12.640049 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:12.640142 master-0 kubenswrapper[37036]: I0312 14:36:12.640113 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:36:12.640265 master-0 kubenswrapper[37036]: I0312 14:36:12.640178 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: 
\"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.640408 master-0 kubenswrapper[37036]: I0312 14:36:12.640367 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:12.640477 master-0 kubenswrapper[37036]: I0312 14:36:12.640457 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:12.640557 master-0 kubenswrapper[37036]: I0312 14:36:12.640526 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:12.640612 master-0 kubenswrapper[37036]: I0312 14:36:12.640506 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.640612 master-0 kubenswrapper[37036]: I0312 14:36:12.640609 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.640750 master-0 kubenswrapper[37036]: I0312 14:36:12.640662 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.640750 master-0 kubenswrapper[37036]: I0312 14:36:12.640710 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvhc\" (UniqueName: \"kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:12.640750 master-0 kubenswrapper[37036]: I0312 14:36:12.640741 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.641194 master-0 kubenswrapper[37036]: I0312 14:36:12.641157 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:12.641308 
master-0 kubenswrapper[37036]: I0312 14:36:12.641256 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:36:12.641826 master-0 kubenswrapper[37036]: I0312 14:36:12.641478 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:12.641885 master-0 kubenswrapper[37036]: I0312 14:36:12.641839 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9cfq\" (UniqueName: \"kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:12.641885 master-0 kubenswrapper[37036]: I0312 14:36:12.641871 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:12.642011 master-0 kubenswrapper[37036]: I0312 14:36:12.641913 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:12.642432 master-0 kubenswrapper[37036]: I0312 14:36:12.642158 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:36:12.642432 master-0 kubenswrapper[37036]: I0312 14:36:12.642384 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-config\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:12.642432 master-0 kubenswrapper[37036]: I0312 14:36:12.642424 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642449 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642479 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume\") pod \"dns-default-fpjck\" (UID: 
\"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642508 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642533 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642557 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642586 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqx42\" (UniqueName: \"kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:12.643989 master-0 
kubenswrapper[37036]: I0312 14:36:12.642626 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642702 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkwb\" (UniqueName: \"kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642748 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642774 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642795 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642838 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642855 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bba274a-38c7-4d13-88a5-6bc39228416c-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642859 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.642955 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 
14:36:12.642981 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643007 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643033 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643058 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krm9\" (UniqueName: \"kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643082 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: 
\"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643128 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57930a54-89ab-4ec8-a504-74035bb74d63-serving-cert\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643215 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643241 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2zk\" (UniqueName: \"kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643262 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv69s\" (UniqueName: \"kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643280 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643291 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f72fbbe-69f0-4622-be05-b839ff9b4d45-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643302 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643338 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643363 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643386 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef824102-83a5-4629-8057-d4f1a57a530d-tmpfs\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643388 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643433 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643459 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdzwp\" (UniqueName: \"kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp\") pod \"migrator-57ccdf9b5-5zswp\" (UID: \"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643483 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643511 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643536 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643561 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643588 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2cq8\" (UniqueName: \"kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643610 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643632 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643690 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643719 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643746 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643771 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643798 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643823 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643823 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643864 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643886 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643922 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643950 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.643989 master-0 kubenswrapper[37036]: I0312 14:36:12.643970 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644075 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/dd29b21c-7a0e-4311-952f-427b00468e66-snapshots\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644155 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2742559-1f28-4f2c-a873-d6a9348972fb-catalog-content\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644222 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6defef79-6058-466a-ae0b-8eb9258126be-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644265 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644295 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644345 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644366 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc73c14-852d-4957-b6ac-84366ba0594f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644375 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644402 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644430 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644459 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644486 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644493 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d775283-2696-4411-8ddf-d4e6000f0a0c-etcd-ca\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644523 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644553 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644610 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644674 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644701 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644721 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644740 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz8z\" (UniqueName: \"kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644759 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644777 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644783 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d775283-2696-4411-8ddf-d4e6000f0a0c-serving-cert\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644864 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644888 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644935 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644961 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9gvf\" (UniqueName: \"kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644969 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1f9b15c6-b4ee-4907-8daa-376e3b438896-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644970 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1bc0d552-01c7-4212-a551-d16419f2dc80-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.644988 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6gf\" (UniqueName: \"kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645028 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645052 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645075 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdqn\" (UniqueName: \"kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645093 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645162 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645182 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645201 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645221 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645254 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645380 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645403 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645424 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645440 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645459 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645473 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a898118-6d01-4211-92f0-43967b75405c-serving-cert\") pod \"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645480 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645504 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645529 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645551 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645569 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645590 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645608 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645630 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645633 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-config\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645650 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645719 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5fb06459-09da-4620-91cf-8c3fe8f425db-tmp\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645781 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/39252b5a-d014-4319-ad81-3c1bf2ef585e-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645798 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/42dbcb8f-e8c4-413e-977d-40aa6df226aa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: \"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645827 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtp2z\" (UniqueName: \"kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645847 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645868 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645886 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645923 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645943 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntrg\" (UniqueName: \"kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg\") pod \"network-check-source-7c67b67d47-wdt59\" (UID: \"f7b68603-8af3-4a50-8d39-86bbcdf1c597\") "
pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:36:12.645854 master-0 kubenswrapper[37036]: I0312 14:36:12.645962 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646138 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646207 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646230 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 
14:36:12.646250 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646384 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646415 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646441 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646453 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57930a54-89ab-4ec8-a504-74035bb74d63-config\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646471 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646500 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646527 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646557 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646586 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646587 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-textfile\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646611 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646656 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646679 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod 
\"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646703 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646723 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646743 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646807 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646843 37036 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646891 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646956 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646972 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d596c0-6a41-43e1-9516-aee9ad834ec2-config\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.646999 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.647036 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.647065 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.647091 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-ovnkube-script-lib\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.647224 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f59d485-9f69-4f36-836e-6338f84b7d69-catalog-content\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.647349 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7433d9bf-4edf-4787-a7a1-e5102c7264c7-metrics-tls\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: 
\"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.651129 master-0 kubenswrapper[37036]: I0312 14:36:12.648739 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/761993bb-2cba-4e1a-b304-36a24817af94-env-overrides\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.656242 master-0 kubenswrapper[37036]: E0312 14:36:12.656203 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.661303 master-0 kubenswrapper[37036]: I0312 14:36:12.661266 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:36:12.665040 master-0 kubenswrapper[37036]: I0312 14:36:12.665010 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6defef79-6058-466a-ae0b-8eb9258126be-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:36:12.681844 master-0 kubenswrapper[37036]: I0312 14:36:12.681793 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:36:12.685828 master-0 kubenswrapper[37036]: I0312 14:36:12.685770 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b9d51570-06dd-4e2f-9c19-07fb694279ae-iptables-alerter-script\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 
14:36:12.702468 master-0 kubenswrapper[37036]: I0312 14:36:12.702425 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 14:36:12.721278 master-0 kubenswrapper[37036]: I0312 14:36:12.721228 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 14:36:12.728287 master-0 kubenswrapper[37036]: I0312 14:36:12.728239 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61de099a-410b-4d30-83e8-19cf5901cb27-signing-cabundle\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:36:12.743561 master-0 kubenswrapper[37036]: I0312 14:36:12.742406 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 14:36:12.744770 master-0 kubenswrapper[37036]: I0312 14:36:12.744737 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61de099a-410b-4d30-83e8-19cf5901cb27-signing-key\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:36:12.748506 master-0 kubenswrapper[37036]: I0312 14:36:12.748465 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.748557 master-0 kubenswrapper[37036]: I0312 14:36:12.748511 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.748557 master-0 kubenswrapper[37036]: I0312 14:36:12.748517 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-var-lib-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.748557 master-0 kubenswrapper[37036]: I0312 14:36:12.748530 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.748557 master-0 kubenswrapper[37036]: I0312 14:36:12.748548 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k" Mar 12 14:36:12.748670 master-0 kubenswrapper[37036]: I0312 14:36:12.748571 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.748670 master-0 kubenswrapper[37036]: I0312 14:36:12.748610 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-slash\") pod 
\"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.748670 master-0 kubenswrapper[37036]: I0312 14:36:12.748617 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.748670 master-0 kubenswrapper[37036]: I0312 14:36:12.748660 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.748810 master-0 kubenswrapper[37036]: I0312 14:36:12.748672 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.748810 master-0 kubenswrapper[37036]: I0312 14:36:12.748680 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.748810 master-0 kubenswrapper[37036]: I0312 14:36:12.748740 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-socket-dir-parent\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " 
pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.748810 master-0 kubenswrapper[37036]: I0312 14:36:12.748745 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3815db41-fe01-43f6-b75c-4ccca9124f51-hosts-file\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k" Mar 12 14:36:12.748810 master-0 kubenswrapper[37036]: I0312 14:36:12.748800 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.748963 master-0 kubenswrapper[37036]: I0312 14:36:12.748892 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749033 master-0 kubenswrapper[37036]: I0312 14:36:12.749000 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749068 master-0 kubenswrapper[37036]: I0312 14:36:12.748932 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-k8s-cni-cncf-io\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " 
pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749068 master-0 kubenswrapper[37036]: I0312 14:36:12.749043 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749134 master-0 kubenswrapper[37036]: I0312 14:36:12.749069 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749134 master-0 kubenswrapper[37036]: I0312 14:36:12.749083 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.749134 master-0 kubenswrapper[37036]: I0312 14:36:12.749101 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-systemd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749224 master-0 kubenswrapper[37036]: I0312 14:36:12.749154 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749224 master-0 kubenswrapper[37036]: I0312 14:36:12.749183 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749224 master-0 kubenswrapper[37036]: I0312 14:36:12.749193 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.749224 master-0 kubenswrapper[37036]: I0312 14:36:12.749211 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749224 master-0 kubenswrapper[37036]: I0312 14:36:12.749223 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749241 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: 
\"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749252 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749279 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-ovn\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749286 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749311 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-bin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749367 master-0 kubenswrapper[37036]: I0312 14:36:12.749315 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " 
pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749417 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749476 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749505 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-wtmp\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749516 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749535 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749565 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-run-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749568 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-root\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749601 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-sys\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:12.749623 master-0 kubenswrapper[37036]: I0312 14:36:12.749612 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749646 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749675 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749648 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-var-lib-kubelet\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749725 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-systemd\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749761 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749777 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749820 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749870 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749842 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-cni-multus\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.749940 master-0 kubenswrapper[37036]: I0312 14:36:12.749944 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-netns\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.749997 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750039 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750065 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750101 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750116 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-run\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750138 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b9d51570-06dd-4e2f-9c19-07fb694279ae-host-slash\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750148 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750170 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750219 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysconfig\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750238 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750262 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750280 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750297 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-node-log\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750333 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.750389 master-0 kubenswrapper[37036]: I0312 14:36:12.750396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: 
\"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750430 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750502 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750565 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750605 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-kubelet\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750646 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 
12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750674 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750695 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-cnibin\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750702 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750733 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/7433d9bf-4edf-4787-a7a1-e5102c7264c7-host-etc-kube\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750774 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750798 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-os-release\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.750844 master-0 kubenswrapper[37036]: I0312 14:36:12.750807 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.750861 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-sys\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.750895 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751001 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751010 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-lib-modules\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751045 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-etc-kubernetes\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751067 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751111 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751165 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-system-cni-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:12.751212 master-0 kubenswrapper[37036]: I0312 14:36:12.751167 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/8e4d9407-ff79-4396-a37f-896617e024d4-rootfs\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751251 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751283 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751339 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751352 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 
14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751418 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-netd\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751512 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751533 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751538 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751616 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-system-cni-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751507 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" 
(UniqueName: \"kubernetes.io/host-path/1f9b15c6-b4ee-4907-8daa-376e3b438896-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751632 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.751715 master-0 kubenswrapper[37036]: I0312 14:36:12.751697 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751740 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751749 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751774 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-etc-openvswitch\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751822 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751839 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-kubernetes\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751844 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751865 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 
14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751930 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.751986 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.752009 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.752081 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-audit-dir\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.752106 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-os-release\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.752162 master-0 kubenswrapper[37036]: I0312 14:36:12.752112 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752198 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752233 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752254 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-multus-certs\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752267 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752315 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752379 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752453 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752481 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.752518 master-0 kubenswrapper[37036]: I0312 14:36:12.752501 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.752803 master-0 kubenswrapper[37036]: I0312 14:36:12.752563 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.752803 master-0 kubenswrapper[37036]: I0312 14:36:12.752615 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.752803 master-0 kubenswrapper[37036]: I0312 14:36:12.752653 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-hostroot\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.752803 master-0 kubenswrapper[37036]: I0312 14:36:12.752766 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.752803 master-0 kubenswrapper[37036]: I0312 14:36:12.752783 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.752953 master-0 kubenswrapper[37036]: I0312 14:36:12.752856 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-host-cni-bin\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.752953 master-0 kubenswrapper[37036]: I0312 14:36:12.752936 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.753014 master-0 kubenswrapper[37036]: I0312 14:36:12.752977 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.753014 master-0 kubenswrapper[37036]: I0312 14:36:12.752996 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:12.753014 master-0 kubenswrapper[37036]: I0312 14:36:12.753008 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-log-socket\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.753101 master-0 kubenswrapper[37036]: I0312 14:36:12.753022 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:12.753101 master-0 kubenswrapper[37036]: I0312 14:36:12.753037 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-var-lib-kubelet\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.753101 master-0 kubenswrapper[37036]: I0312 14:36:12.753073 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.753101 master-0 kubenswrapper[37036]: I0312 14:36:12.753100 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.753206 master-0 kubenswrapper[37036]: I0312 14:36:12.753132 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/761993bb-2cba-4e1a-b304-36a24817af94-systemd-units\") pod \"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k"
Mar 12 14:36:12.753206 master-0 kubenswrapper[37036]: I0312 14:36:12.753137 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-host-run-netns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.753308 master-0 kubenswrapper[37036]: I0312 14:36:12.753274 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.753347 master-0 kubenswrapper[37036]: I0312 14:36:12.753313 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753383 master-0 kubenswrapper[37036]: I0312 14:36:12.753341 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753383 master-0 kubenswrapper[37036]: I0312 14:36:12.753348 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9757756c-cb67-4b6f-99c3-dd63f904897a-cnibin\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v"
Mar 12 14:36:12.753441 master-0 kubenswrapper[37036]: I0312 14:36:12.753396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753441 master-0 kubenswrapper[37036]: I0312 14:36:12.753407 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.753441 master-0 kubenswrapper[37036]: I0312 14:36:12.753434 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:36:12.753525 master-0 kubenswrapper[37036]: I0312 14:36:12.753451 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-modprobe-d\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753525 master-0 kubenswrapper[37036]: I0312 14:36:12.753458 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753525 master-0 kubenswrapper[37036]: I0312 14:36:12.753490 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.753612 master-0 kubenswrapper[37036]: I0312 14:36:12.753546 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1047bb4a-135f-488d-9399-0518cb3a827d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5"
Mar 12 14:36:12.753612 master-0 kubenswrapper[37036]: I0312 14:36:12.753567 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-etc-sysctl-conf\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753667 master-0 kubenswrapper[37036]: I0312 14:36:12.753638 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:12.753700 master-0 kubenswrapper[37036]: I0312 14:36:12.753671 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753878 master-0 kubenswrapper[37036]: I0312 14:36:12.753845 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5fb06459-09da-4620-91cf-8c3fe8f425db-host\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk"
Mar 12 14:36:12.753878 master-0 kubenswrapper[37036]: I0312 14:36:12.753864 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.753953 master-0 kubenswrapper[37036]: I0312 14:36:12.753943 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.753991 master-0 kubenswrapper[37036]: I0312 14:36:12.753977 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.754038 master-0 kubenswrapper[37036]: I0312 14:36:12.754000 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.754098 master-0 kubenswrapper[37036]: I0312 14:36:12.754076 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/95c11263-0d68-4b11-bcfd-bcb0e96a6988-multus-conf-dir\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz"
Mar 12 14:36:12.754133 master-0 kubenswrapper[37036]: I0312 14:36:12.754122 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:12.754182 master-0 kubenswrapper[37036]: I0312 14:36:12.754162 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a35674af-162c-4a4a-8605-158b2326267e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx"
Mar 12 14:36:12.754212 master-0 kubenswrapper[37036]: I0312 14:36:12.754193 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-dir\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:12.754711 master-0 kubenswrapper[37036]: I0312 14:36:12.754660 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.754749 master-0 kubenswrapper[37036]: I0312 14:36:12.754719 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.754809 master-0 kubenswrapper[37036]: I0312 14:36:12.754778 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/39bda5b8-c748-4023-8680-8e8454512e5b-node-pullsecrets\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.754841 master-0 kubenswrapper[37036]: I0312 14:36:12.754807 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.755121 master-0 kubenswrapper[37036]: I0312 14:36:12.754789 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.755184 master-0 kubenswrapper[37036]: I0312 14:36:12.755145 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/39252b5a-d014-4319-ad81-3c1bf2ef585e-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:12.762860 master-0 kubenswrapper[37036]: I0312 14:36:12.762822 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 12 14:36:12.781747 master-0 kubenswrapper[37036]: I0312 14:36:12.781698 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 12 14:36:12.800782 master-0 kubenswrapper[37036]: I0312 14:36:12.800683 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 14:36:12.827142 master-0 kubenswrapper[37036]: I0312 14:36:12.826392 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 14:36:12.828061 master-0 kubenswrapper[37036]: I0312 14:36:12.828021 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-serving-cert\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.844006 master-0 kubenswrapper[37036]: I0312 14:36:12.843933 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 14:36:12.851544 master-0 kubenswrapper[37036]: I0312 14:36:12.851492 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-client\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.861802 master-0 kubenswrapper[37036]: I0312 14:36:12.861704 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 14:36:12.872664 master-0 kubenswrapper[37036]: I0312 14:36:12.872593 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.881232 master-0 kubenswrapper[37036]: I0312 14:36:12.881181 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 12 14:36:12.882731 master-0 kubenswrapper[37036]: I0312 14:36:12.882689 37036 scope.go:117] "RemoveContainer" containerID="38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0"
Mar 12 14:36:12.882840 master-0 kubenswrapper[37036]: I0312 14:36:12.882751 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:12.882911 master-0 kubenswrapper[37036]: I0312 14:36:12.882844 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:12.883180 master-0 kubenswrapper[37036]: I0312 14:36:12.883130 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:12.883278 master-0 kubenswrapper[37036]: I0312 14:36:12.883228 37036 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:12.886120 master-0 kubenswrapper[37036]: I0312 14:36:12.886074 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.886183 master-0 kubenswrapper[37036]: I0312 14:36:12.886127 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.886245 master-0 kubenswrapper[37036]: I0312 14:36:12.886188 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.886245 master-0 kubenswrapper[37036]: I0312 14:36:12.886238 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.886832 master-0 kubenswrapper[37036]: I0312 14:36:12.886796 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.887071 master-0 kubenswrapper[37036]: I0312 14:36:12.887045 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:12.890279 master-0 kubenswrapper[37036]: I0312 14:36:12.890238 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/39bda5b8-c748-4023-8680-8e8454512e5b-encryption-config\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.890575 master-0 kubenswrapper[37036]: I0312 14:36:12.890543 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 14:36:12.891244 master-0 kubenswrapper[37036]: I0312 14:36:12.891217 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.892242 master-0 kubenswrapper[37036]: I0312 14:36:12.892201 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:12.903197 master-0 kubenswrapper[37036]: I0312 14:36:12.903144 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 14:36:12.922320 master-0 kubenswrapper[37036]: I0312 14:36:12.922282 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 14:36:12.944552 master-0 kubenswrapper[37036]: I0312 14:36:12.944490 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 14:36:12.954297 master-0 kubenswrapper[37036]: I0312 14:36:12.954196 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-image-import-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.958157 master-0 kubenswrapper[37036]: I0312 14:36:12.958102 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.958157 master-0 kubenswrapper[37036]: I0312 14:36:12.958165 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.961714 master-0 kubenswrapper[37036]: I0312 14:36:12.961662 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 14:36:12.967263 master-0 kubenswrapper[37036]: I0312 14:36:12.967217 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-audit\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:12.974262 master-0 kubenswrapper[37036]: I0312 14:36:12.971288 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:12.977254 master-0 kubenswrapper[37036]: I0312 14:36:12.977197 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:12.983032 master-0 kubenswrapper[37036]: I0312 14:36:12.982606 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 12 14:36:12.986616 master-0 kubenswrapper[37036]: I0312 14:36:12.986587 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:13.001403 master-0 kubenswrapper[37036]: I0312 14:36:13.001368 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 14:36:13.006373 master-0 kubenswrapper[37036]: I0312 14:36:13.006311 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-etcd-serving-ca\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:13.027532 master-0 kubenswrapper[37036]: I0312 14:36:13.027356 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 12 14:36:13.042205 master-0 kubenswrapper[37036]: I0312 14:36:13.042157 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 12 14:36:13.053165 master-0 kubenswrapper[37036]: I0312 14:36:13.053058 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn"
Mar 12 14:36:13.060830 master-0 kubenswrapper[37036]: I0312 14:36:13.060778 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:36:13.061142 master-0 kubenswrapper[37036]: I0312 14:36:13.060842 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:36:13.061142 master-0 kubenswrapper[37036]: I0312 14:36:13.060918 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:13.061142 master-0 kubenswrapper[37036]: I0312 14:36:13.061073 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock" (OuterVolumeSpecName: "var-lock") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:13.063452 master-0 kubenswrapper[37036]: I0312 14:36:13.062789 37036 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:13.063452 master-0 kubenswrapper[37036]: I0312 14:36:13.062817 37036 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a56d42a-efb4-4956-acab-d12c7ca5276e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:13.069647 master-0 kubenswrapper[37036]: I0312 14:36:13.069590 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 14:36:13.075453 master-0 kubenswrapper[37036]: I0312 14:36:13.075415 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39bda5b8-c748-4023-8680-8e8454512e5b-trusted-ca-bundle\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:13.081209 master-0 kubenswrapper[37036]: I0312 14:36:13.081168 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 14:36:13.101471 master-0 kubenswrapper[37036]: I0312 14:36:13.101424 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 12 14:36:13.121805 master-0 kubenswrapper[37036]: I0312 14:36:13.121764 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 12 14:36:13.134046 master-0 kubenswrapper[37036]: I0312 14:36:13.133495 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-default-certificate\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:36:13.143663 master-0 kubenswrapper[37036]: I0312 14:36:13.143437 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 12 14:36:13.172448 master-0 kubenswrapper[37036]: I0312 14:36:13.172373 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 12 14:36:13.173869 master-0 kubenswrapper[37036]: I0312 14:36:13.173837 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-stats-auth\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:36:13.182594 master-0 kubenswrapper[37036]: I0312 14:36:13.182536 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 12 14:36:13.201688 master-0 kubenswrapper[37036]: I0312 14:36:13.201632 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 12 14:36:13.208972 master-0 kubenswrapper[37036]: I0312 14:36:13.208624 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-metrics-certs\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:36:13.226146 master-0 kubenswrapper[37036]: I0312 14:36:13.226091 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 12 14:36:13.241326 master-0 kubenswrapper[37036]: I0312 14:36:13.241273 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 14:36:13.243494 master-0 kubenswrapper[37036]: I0312 14:36:13.243441 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes"
Mar 12 14:36:13.243876 master-0 kubenswrapper[37036]: I0312 14:36:13.243847 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 12 14:36:13.251620 master-0 kubenswrapper[37036]: I0312 14:36:13.251578 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-service-ca-bundle\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp"
Mar 12 14:36:13.262240 master-0 kubenswrapper[37036]: I0312 14:36:13.262199 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 14:36:13.283013 master-0 kubenswrapper[37036]: I0312 14:36:13.282940 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 12 14:36:13.292702 master-0 kubenswrapper[37036]: I0312 14:36:13.292648 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z"
Mar 12 14:36:13.306653 master-0 kubenswrapper[37036]: I0312 14:36:13.306533 37036 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-catalogd"/"catalogserver-cert" Mar 12 14:36:13.313646 master-0 kubenswrapper[37036]: I0312 14:36:13.313594 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/39252b5a-d014-4319-ad81-3c1bf2ef585e-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:13.322127 master-0 kubenswrapper[37036]: I0312 14:36:13.322066 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:36:13.342232 master-0 kubenswrapper[37036]: I0312 14:36:13.342183 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:36:13.346304 master-0 kubenswrapper[37036]: I0312 14:36:13.345728 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a35674af-162c-4a4a-8605-158b2326267e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:13.361845 master-0 kubenswrapper[37036]: I0312 14:36:13.361786 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 14:36:13.368121 master-0 kubenswrapper[37036]: I0312 14:36:13.368059 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-trusted-ca-bundle\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.381314 master-0 kubenswrapper[37036]: I0312 
14:36:13.381277 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:36:13.383978 master-0 kubenswrapper[37036]: I0312 14:36:13.383927 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a35674af-162c-4a4a-8605-158b2326267e-service-ca\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:13.401869 master-0 kubenswrapper[37036]: I0312 14:36:13.401822 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 14:36:13.408040 master-0 kubenswrapper[37036]: I0312 14:36:13.408005 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/900b2a0e-1e2b-41a3-86f5-639ec1e95969-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-b5qg4\" (UID: \"900b2a0e-1e2b-41a3-86f5-639ec1e95969\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:36:13.422879 master-0 kubenswrapper[37036]: I0312 14:36:13.422836 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 14:36:13.427227 master-0 kubenswrapper[37036]: I0312 14:36:13.427203 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-encryption-config\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.442091 master-0 kubenswrapper[37036]: I0312 14:36:13.442021 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 14:36:13.462062 master-0 kubenswrapper[37036]: I0312 14:36:13.462023 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 14:36:13.466087 master-0 kubenswrapper[37036]: I0312 14:36:13.466056 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-audit-policies\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.482310 master-0 kubenswrapper[37036]: I0312 14:36:13.482266 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 14:36:13.485184 master-0 kubenswrapper[37036]: I0312 14:36:13.485138 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-serving-ca\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.501488 master-0 kubenswrapper[37036]: I0312 14:36:13.501443 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 14:36:13.522680 master-0 kubenswrapper[37036]: I0312 14:36:13.522636 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 14:36:13.524982 master-0 kubenswrapper[37036]: I0312 14:36:13.524946 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-serving-cert\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " 
pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.541883 master-0 kubenswrapper[37036]: I0312 14:36:13.541844 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 14:36:13.547692 master-0 kubenswrapper[37036]: I0312 14:36:13.547641 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-etcd-client\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:13.559925 master-0 kubenswrapper[37036]: I0312 14:36:13.559798 37036 request.go:700] Waited for 1.011911308s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Mar 12 14:36:13.561319 master-0 kubenswrapper[37036]: I0312 14:36:13.561299 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 14:36:13.563546 master-0 kubenswrapper[37036]: I0312 14:36:13.563506 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ec846db-e344-4f9e-95e6-7a0055f52766-config-volume\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:13.581656 master-0 kubenswrapper[37036]: I0312 14:36:13.581611 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 14:36:13.583790 master-0 kubenswrapper[37036]: I0312 14:36:13.583759 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3ec846db-e344-4f9e-95e6-7a0055f52766-metrics-tls\") pod 
\"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:13.601199 master-0 kubenswrapper[37036]: I0312 14:36:13.601129 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 14:36:13.622153 master-0 kubenswrapper[37036]: I0312 14:36:13.622114 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 14:36:13.623426 master-0 kubenswrapper[37036]: I0312 14:36:13.623380 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3c13c5f-3d1f-4e0a-b77b-732255680086-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:36:13.627889 master-0 kubenswrapper[37036]: E0312 14:36:13.627867 37036 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.628080 master-0 kubenswrapper[37036]: E0312 14:36:13.628065 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.128044242 +0000 UTC m=+33.135785249 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.628186 master-0 kubenswrapper[37036]: E0312 14:36:13.627944 37036 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.628323 master-0 kubenswrapper[37036]: E0312 14:36:13.627953 37036 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.628388 master-0 kubenswrapper[37036]: E0312 14:36:13.628178 37036 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.628388 master-0 kubenswrapper[37036]: E0312 14:36:13.628298 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config podName:6b77ad35-2fff-47bb-ad34-abb3868b09a9 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.128287478 +0000 UTC m=+33.136028495 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-lds9v" (UID: "6b77ad35-2fff-47bb-ad34-abb3868b09a9") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.628492 master-0 kubenswrapper[37036]: E0312 14:36:13.628409 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs podName:8dd912f8-2c4d-4a0a-ba41-918ab5c235ba nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.12839005 +0000 UTC m=+33.136131057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs") pod "multus-admission-controller-7769569c45-s5wj4" (UID: "8dd912f8-2c4d-4a0a-ba41-918ab5c235ba") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.628492 master-0 kubenswrapper[37036]: E0312 14:36:13.628430 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images podName:1047bb4a-135f-488d-9399-0518cb3a827d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.128422592 +0000 UTC m=+33.136163629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" (UID: "1047bb4a-135f-488d-9399-0518cb3a827d") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630076 master-0 kubenswrapper[37036]: E0312 14:36:13.630052 37036 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630157 master-0 kubenswrapper[37036]: E0312 14:36:13.630096 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles podName:99433993-93cf-46cb-bb66-485672cb2554 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130086304 +0000 UTC m=+33.137827241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles") pod "controller-manager-6689dcd7fd-vw9vd" (UID: "99433993-93cf-46cb-bb66-485672cb2554") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630157 master-0 kubenswrapper[37036]: E0312 14:36:13.630116 37036 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630157 master-0 kubenswrapper[37036]: E0312 14:36:13.630139 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config podName:a81be38f-e07e-4863-8d61-fdefc2713a6a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130133365 +0000 UTC m=+33.137874302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-vfvts" (UID: "a81be38f-e07e-4863-8d61-fdefc2713a6a") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630157 master-0 kubenswrapper[37036]: E0312 14:36:13.630155 37036 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630177 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert podName:ef824102-83a5-4629-8057-d4f1a57a530d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130170576 +0000 UTC m=+33.137911513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert") pod "packageserver-5957c5c5dc-njb8x" (UID: "ef824102-83a5-4629-8057-d4f1a57a530d") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630198 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630215 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130210097 +0000 UTC m=+33.137951034 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630234 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630251 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca podName:4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130246688 +0000 UTC m=+33.137987625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca") pod "prometheus-operator-5ff8674d55-bwl7h" (UID: "4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630273 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630293 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130288879 +0000 UTC m=+33.138029816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630309 37036 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630328 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert podName:99433993-93cf-46cb-bb66-485672cb2554 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.13032318 +0000 UTC m=+33.138064117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert") pod "controller-manager-6689dcd7fd-vw9vd" (UID: "99433993-93cf-46cb-bb66-485672cb2554") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630351 master-0 kubenswrapper[37036]: E0312 14:36:13.630347 37036 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630367 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert podName:dd29b21c-7a0e-4311-952f-427b00468e66 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.13036228 +0000 UTC m=+33.138103217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert") pod "insights-operator-8f89dfddd-gltz7" (UID: "dd29b21c-7a0e-4311-952f-427b00468e66") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630387 37036 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630409 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert podName:df31c4c2-304e-4bad-8e6f-18c174eba675 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130401931 +0000 UTC m=+33.138142868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert") pod "route-controller-manager-7f8bfc67b-pz8rc" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630422 37036 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630441 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert podName:de61e1fe-294c-48a6-8cf3-aeb4637ef2cc nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130436342 +0000 UTC m=+33.138177279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-pxgq9" (UID: "de61e1fe-294c-48a6-8cf3-aeb4637ef2cc") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630463 37036 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.630787 master-0 kubenswrapper[37036]: E0312 14:36:13.630481 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config podName:3edaa533-ecbb-443e-a270-4cb4f923daf6 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.130476203 +0000 UTC m=+33.138217140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config") pod "cluster-baremetal-operator-5cdb4c5598-hs6mc" (UID: "3edaa533-ecbb-443e-a270-4cb4f923daf6") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.631146 master-0 kubenswrapper[37036]: E0312 14:36:13.631123 37036 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.631285 master-0 kubenswrapper[37036]: E0312 14:36:13.631272 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config podName:df31c4c2-304e-4bad-8e6f-18c174eba675 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.131256243 +0000 UTC m=+33.138997260 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config") pod "route-controller-manager-7f8bfc67b-pz8rc" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.631401 master-0 kubenswrapper[37036]: E0312 14:36:13.631388 37036 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.631508 master-0 kubenswrapper[37036]: E0312 14:36:13.631496 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls podName:61d829d7-38e1-4826-942c-f7317c4a4bec nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.131484059 +0000 UTC m=+33.139225086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls") pod "machine-config-controller-ff46b7bdf-vfsmf" (UID: "61d829d7-38e1-4826-942c-f7317c4a4bec") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.631616 master-0 kubenswrapper[37036]: E0312 14:36:13.631590 37036 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.631675 master-0 kubenswrapper[37036]: E0312 14:36:13.631638 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images podName:3edaa533-ecbb-443e-a270-4cb4f923daf6 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.131627442 +0000 UTC m=+33.139368459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images") pod "cluster-baremetal-operator-5cdb4c5598-hs6mc" (UID: "3edaa533-ecbb-443e-a270-4cb4f923daf6") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.631747 master-0 kubenswrapper[37036]: E0312 14:36:13.631733 37036 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.631857 master-0 kubenswrapper[37036]: E0312 14:36:13.631845 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca podName:df31c4c2-304e-4bad-8e6f-18c174eba675 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.131834227 +0000 UTC m=+33.139575254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca") pod "route-controller-manager-7f8bfc67b-pz8rc" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.633872 master-0 kubenswrapper[37036]: E0312 14:36:13.633849 37036 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.633968 master-0 kubenswrapper[37036]: E0312 14:36:13.633917 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config podName:59f21770-429b-4b63-82fd-50ce0daf698d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.133888519 +0000 UTC m=+33.141629456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-jms82" (UID: "59f21770-429b-4b63-82fd-50ce0daf698d") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.635029 master-0 kubenswrapper[37036]: E0312 14:36:13.634999 37036 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.635108 master-0 kubenswrapper[37036]: E0312 14:36:13.635050 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca podName:99433993-93cf-46cb-bb66-485672cb2554 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.135038688 +0000 UTC m=+33.142779625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca") pod "controller-manager-6689dcd7fd-vw9vd" (UID: "99433993-93cf-46cb-bb66-485672cb2554") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.637141 master-0 kubenswrapper[37036]: E0312 14:36:13.637107 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.637251 master-0 kubenswrapper[37036]: E0312 14:36:13.637197 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.137153661 +0000 UTC m=+33.144894648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.637251 master-0 kubenswrapper[37036]: E0312 14:36:13.637224 37036 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.637363 master-0 kubenswrapper[37036]: E0312 14:36:13.637257 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.137248494 +0000 UTC m=+33.144989541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638542 master-0 kubenswrapper[37036]: E0312 14:36:13.638521 37036 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638642 master-0 kubenswrapper[37036]: E0312 14:36:13.638613 37036 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.638695 master-0 kubenswrapper[37036]: E0312 14:36:13.638670 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images podName:6b77ad35-2fff-47bb-ad34-abb3868b09a9 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138659139 +0000 UTC m=+33.146400136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images") pod "machine-config-operator-fdb5c78b5-lds9v" (UID: "6b77ad35-2fff-47bb-ad34-abb3868b09a9") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.638695 master-0 kubenswrapper[37036]: E0312 14:36:13.638533 37036 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638795 master-0 kubenswrapper[37036]: E0312 14:36:13.638573 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638795 master-0 kubenswrapper[37036]: E0312 14:36:13.638733 37036 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638795 master-0 kubenswrapper[37036]: E0312 14:36:13.638712 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls podName:4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.1387042 +0000 UTC m=+33.146445207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-bwl7h" (UID: "4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638795 master-0 kubenswrapper[37036]: E0312 14:36:13.638765 37036 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.638795 master-0 kubenswrapper[37036]: E0312 14:36:13.638785 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138769822 +0000 UTC m=+33.146510809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638771 37036 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638807 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638808 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config podName:4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138797682 +0000 UTC m=+33.146538699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-bwl7h" (UID: "4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638859 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls podName:6b77ad35-2fff-47bb-ad34-abb3868b09a9 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138849034 +0000 UTC m=+33.146590071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls") pod "machine-config-operator-fdb5c78b5-lds9v" (UID: "6b77ad35-2fff-47bb-ad34-abb3868b09a9") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638874 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert podName:ef5679f7-5bf5-409d-b74b-64a9cbb6c701 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138866674 +0000 UTC m=+33.146607711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert") pod "ingress-canary-dbdr9" (UID: "ef5679f7-5bf5-409d-b74b-64a9cbb6c701") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639032 master-0 kubenswrapper[37036]: E0312 14:36:13.638889 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.138882554 +0000 UTC m=+33.146623581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.639326 master-0 kubenswrapper[37036]: E0312 14:36:13.639312 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert podName:3edaa533-ecbb-443e-a270-4cb4f923daf6 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.139298205 +0000 UTC m=+33.147039142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert") pod "cluster-baremetal-operator-5cdb4c5598-hs6mc" (UID: "3edaa533-ecbb-443e-a270-4cb4f923daf6") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.639969 master-0 kubenswrapper[37036]: E0312 14:36:13.639892 37036 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.640027 master-0 kubenswrapper[37036]: E0312 14:36:13.639998 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config podName:40912d56-8288-4d58-ad91-7455bd460887 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.139983722 +0000 UTC m=+33.147724729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config") pod "machine-approver-754bdc9f9d-44b6s" (UID: "40912d56-8288-4d58-ad91-7455bd460887") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.640767 master-0 kubenswrapper[37036]: E0312 14:36:13.640745 37036 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.640844 master-0 kubenswrapper[37036]: E0312 14:36:13.640786 37036 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.640844 master-0 kubenswrapper[37036]: E0312 14:36:13.640800 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config podName:40912d56-8288-4d58-ad91-7455bd460887 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.140786302 +0000 UTC m=+33.148527319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config") pod "machine-approver-754bdc9f9d-44b6s" (UID: "40912d56-8288-4d58-ad91-7455bd460887") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.640969 master-0 kubenswrapper[37036]: E0312 14:36:13.640864 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.140850393 +0000 UTC m=+33.148591410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.641112 master-0 kubenswrapper[37036]: E0312 14:36:13.641089 37036 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.641213 master-0 kubenswrapper[37036]: E0312 14:36:13.641135 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls podName:a81be38f-e07e-4863-8d61-fdefc2713a6a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.141124571 +0000 UTC m=+33.148865578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-vfvts" (UID: "a81be38f-e07e-4863-8d61-fdefc2713a6a") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.641342 master-0 kubenswrapper[37036]: I0312 14:36:13.641325 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 12 14:36:13.641440 master-0 kubenswrapper[37036]: E0312 14:36:13.641348 37036 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.641501 master-0 kubenswrapper[37036]: E0312 14:36:13.641465 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images podName:6f5cd3ff-ced6-47e3-8054-d83053d87680 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.141454009 +0000 UTC m=+33.149195006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images") pod "machine-api-operator-84bf6db4f9-qtx2d" (UID: "6f5cd3ff-ced6-47e3-8054-d83053d87680") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.642723 master-0 kubenswrapper[37036]: E0312 14:36:13.642696 37036 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.642815 master-0 kubenswrapper[37036]: E0312 14:36:13.642750 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle podName:dd29b21c-7a0e-4311-952f-427b00468e66 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.142737941 +0000 UTC m=+33.150478928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle") pod "insights-operator-8f89dfddd-gltz7" (UID: "dd29b21c-7a0e-4311-952f-427b00468e66") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.642815 master-0 kubenswrapper[37036]: E0312 14:36:13.642697 37036 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.642815 master-0 kubenswrapper[37036]: E0312 14:36:13.642796 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert podName:ef824102-83a5-4629-8057-d4f1a57a530d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.142784443 +0000 UTC m=+33.150525450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert") pod "packageserver-5957c5c5dc-njb8x" (UID: "ef824102-83a5-4629-8057-d4f1a57a530d") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.643026 master-0 kubenswrapper[37036]: E0312 14:36:13.643007 37036 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.643151 master-0 kubenswrapper[37036]: E0312 14:36:13.643137 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.143124771 +0000 UTC m=+33.150865768 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.643237 master-0 kubenswrapper[37036]: E0312 14:36:13.643033 37036 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.643340 master-0 kubenswrapper[37036]: E0312 14:36:13.643329 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.143319776 +0000 UTC m=+33.151060803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.643410 master-0 kubenswrapper[37036]: E0312 14:36:13.643238 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.643517 master-0 kubenswrapper[37036]: E0312 14:36:13.643504 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14349321 +0000 UTC m=+33.151234207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.643624 master-0 kubenswrapper[37036]: E0312 14:36:13.643610 37036 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.643744 master-0 kubenswrapper[37036]: E0312 14:36:13.643732 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config podName:8e4d9407-ff79-4396-a37f-896617e024d4 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.143721315 +0000 UTC m=+33.151462322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config") pod "machine-config-daemon-ngzc8" (UID: "8e4d9407-ff79-4396-a37f-896617e024d4") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.644547 master-0 kubenswrapper[37036]: E0312 14:36:13.644518 37036 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.644659 master-0 kubenswrapper[37036]: E0312 14:36:13.644630 37036 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.644659 master-0 kubenswrapper[37036]: E0312 14:36:13.644643 37036 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.644769 master-0 kubenswrapper[37036]: E0312 14:36:13.644665 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config podName:99433993-93cf-46cb-bb66-485672cb2554 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144653849 +0000 UTC m=+33.152394786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config") pod "controller-manager-6689dcd7fd-vw9vd" (UID: "99433993-93cf-46cb-bb66-485672cb2554") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.644769 master-0 kubenswrapper[37036]: E0312 14:36:13.644581 37036 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.644769 master-0 kubenswrapper[37036]: E0312 14:36:13.644722 37036 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.644769 master-0 kubenswrapper[37036]: E0312 14:36:13.644594 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7nn9s21bftmgp: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.644769 master-0 kubenswrapper[37036]: E0312 14:36:13.644773 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644603 37036 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644612 37036 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644708 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs podName:6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14469804 +0000 UTC m=+33.152438977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs") pod "machine-config-server-nj7qg" (UID: "6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644873 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca podName:de61e1fe-294c-48a6-8cf3-aeb4637ef2cc nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144854964 +0000 UTC m=+33.152595901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-pxgq9" (UID: "de61e1fe-294c-48a6-8cf3-aeb4637ef2cc") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644889 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config podName:1047bb4a-135f-488d-9399-0518cb3a827d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144881845 +0000 UTC m=+33.152622782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" (UID: "1047bb4a-135f-488d-9399-0518cb3a827d") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644919 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls podName:59f21770-429b-4b63-82fd-50ce0daf698d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144912006 +0000 UTC m=+33.152652943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-jms82" (UID: "59f21770-429b-4b63-82fd-50ce0daf698d") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644932 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144926656 +0000 UTC m=+33.152667593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644941 37036 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644969 37036 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645000 master-0 kubenswrapper[37036]: E0312 14:36:13.644979 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.644942 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.144938196 +0000 UTC m=+33.152679133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.645039 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls podName:6f5cd3ff-ced6-47e3-8054-d83053d87680 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.145027288 +0000 UTC m=+33.152768285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-qtx2d" (UID: "6f5cd3ff-ced6-47e3-8054-d83053d87680") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.645060 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert podName:06eb9f4b-167e-435b-8ef6-ae44fc0b85a9 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.145051069 +0000 UTC m=+33.152792106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-xgrsw" (UID: "06eb9f4b-167e-435b-8ef6-ae44fc0b85a9") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.645089 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config podName:6f5cd3ff-ced6-47e3-8054-d83053d87680 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14507929 +0000 UTC m=+33.152820407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config") pod "machine-api-operator-84bf6db4f9-qtx2d" (UID: "6f5cd3ff-ced6-47e3-8054-d83053d87680") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.645113 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls podName:8e4d9407-ff79-4396-a37f-896617e024d4 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14510525 +0000 UTC m=+33.152846277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls") pod "machine-config-daemon-ngzc8" (UID: "8e4d9407-ff79-4396-a37f-896617e024d4") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645571 master-0 kubenswrapper[37036]: E0312 14:36:13.645136 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.145127651 +0000 UTC m=+33.152868588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.645854 master-0 kubenswrapper[37036]: E0312 14:36:13.645783 37036 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645854 master-0 kubenswrapper[37036]: E0312 14:36:13.645824 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls podName:40912d56-8288-4d58-ad91-7455bd460887 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.145813799 +0000 UTC m=+33.153554736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls") pod "machine-approver-754bdc9f9d-44b6s" (UID: "40912d56-8288-4d58-ad91-7455bd460887") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645854 master-0 kubenswrapper[37036]: E0312 14:36:13.645827 37036 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.645854 master-0 kubenswrapper[37036]: E0312 14:36:13.645829 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.646053 master-0 kubenswrapper[37036]: E0312 14:36:13.645860 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls podName:f569ed3b-924d-4829-b192-f508ee70658d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.1458522 +0000 UTC m=+33.153593137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-q29ch" (UID: "f569ed3b-924d-4829-b192-f508ee70658d") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.646053 master-0 kubenswrapper[37036]: E0312 14:36:13.645886 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca podName:59f21770-429b-4b63-82fd-50ce0daf698d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14587827 +0000 UTC m=+33.153619207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-jms82" (UID: "59f21770-429b-4b63-82fd-50ce0daf698d") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.646183 master-0 kubenswrapper[37036]: E0312 14:36:13.646163 37036 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.646308 master-0 kubenswrapper[37036]: E0312 14:36:13.646294 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle podName:dd29b21c-7a0e-4311-952f-427b00468e66 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14627745 +0000 UTC m=+33.154018427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle") pod "insights-operator-8f89dfddd-gltz7" (UID: "dd29b21c-7a0e-4311-952f-427b00468e66") : failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.646425 master-0 kubenswrapper[37036]: E0312 14:36:13.646395 37036 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.646492 master-0 kubenswrapper[37036]: E0312 14:36:13.646428 37036 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.646492 master-0 kubenswrapper[37036]: E0312 14:36:13.646401 37036 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 12 14:36:13.646492 master-0 kubenswrapper[37036]: E0312 14:36:13.646462 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls podName:1047bb4a-135f-488d-9399-0518cb3a827d nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.146444274 +0000 UTC m=+33.154185291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" (UID: "1047bb4a-135f-488d-9399-0518cb3a827d") : failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646508 37036 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646516 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert podName:9757edbb-8ce2-4513-9b32-a552df50634c nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.146503855 +0000 UTC m=+33.154244902 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert") pod "cluster-autoscaler-operator-69576476f7-b7296" (UID: "9757edbb-8ce2-4513-9b32-a552df50634c") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646526 37036 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646539 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config podName:61d829d7-38e1-4826-942c-f7317c4a4bec nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.146530706 +0000 UTC m=+33.154271733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-vfsmf" (UID: "61d829d7-38e1-4826-942c-f7317c4a4bec") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646558 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token podName:6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.146548916 +0000 UTC m=+33.154289953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token") pod "machine-config-server-nj7qg" (UID: "6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.646627 master-0 kubenswrapper[37036]: E0312 14:36:13.646578 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls podName:3edaa533-ecbb-443e-a270-4cb4f923daf6 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.146570007 +0000 UTC m=+33.154311044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-hs6mc" (UID: "3edaa533-ecbb-443e-a270-4cb4f923daf6") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.646918 master-0 kubenswrapper[37036]: E0312 14:36:13.646883 37036 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.647026 master-0 kubenswrapper[37036]: E0312 14:36:13.647012 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config podName:9757edbb-8ce2-4513-9b32-a552df50634c nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.147000588 +0000 UTC m=+33.154741605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-b7296" (UID: "9757edbb-8ce2-4513-9b32-a552df50634c") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.647104 master-0 kubenswrapper[37036]: E0312 14:36:13.646941 37036 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.647198 master-0 kubenswrapper[37036]: E0312 14:36:13.647187 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.147179553 +0000 UTC m=+33.154920490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:13.647258 master-0 kubenswrapper[37036]: E0312 14:36:13.646962 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.647343 master-0 kubenswrapper[37036]: E0312 14:36:13.647330 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap podName:a81be38f-e07e-4863-8d61-fdefc2713a6a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.147322706 +0000 UTC m=+33.155063643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-vfvts" (UID: "a81be38f-e07e-4863-8d61-fdefc2713a6a") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.647405 master-0 kubenswrapper[37036]: E0312 14:36:13.647068 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.647485 master-0 kubenswrapper[37036]: E0312 14:36:13.647474 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca podName:a81be38f-e07e-4863-8d61-fdefc2713a6a nodeName:}" failed. No retries permitted until 2026-03-12 14:36:14.14746746 +0000 UTC m=+33.155208397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-vfvts" (UID: "a81be38f-e07e-4863-8d61-fdefc2713a6a") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:13.661519 master-0 kubenswrapper[37036]: I0312 14:36:13.661480 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:36:13.683916 master-0 kubenswrapper[37036]: I0312 14:36:13.683863 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:36:13.701782 master-0 kubenswrapper[37036]: I0312 14:36:13.701729 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:36:13.721629 master-0 kubenswrapper[37036]: I0312 14:36:13.721540 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:36:13.743277 master-0 kubenswrapper[37036]: I0312 14:36:13.743220 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:36:13.761803 master-0 kubenswrapper[37036]: I0312 14:36:13.761759 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:36:13.781886 master-0 kubenswrapper[37036]: I0312 14:36:13.781806 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:36:13.802239 master-0 kubenswrapper[37036]: I0312 14:36:13.802164 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 14:36:13.821368 master-0 kubenswrapper[37036]: I0312 14:36:13.821203 
37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:36:13.841432 master-0 kubenswrapper[37036]: I0312 14:36:13.841374 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 14:36:13.869398 master-0 kubenswrapper[37036]: I0312 14:36:13.869351 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 14:36:13.881425 master-0 kubenswrapper[37036]: I0312 14:36:13.881387 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 14:36:13.920356 master-0 kubenswrapper[37036]: I0312 14:36:13.920278 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:36:13.921608 master-0 kubenswrapper[37036]: I0312 14:36:13.921562 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:36:13.942177 master-0 kubenswrapper[37036]: I0312 14:36:13.942137 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 14:36:13.962460 master-0 kubenswrapper[37036]: I0312 14:36:13.962407 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:36:13.982374 master-0 kubenswrapper[37036]: I0312 14:36:13.982321 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 14:36:13.983372 master-0 kubenswrapper[37036]: I0312 14:36:13.983346 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log" Mar 12 14:36:13.984708 master-0 kubenswrapper[37036]: I0312 14:36:13.984680 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:14.002395 master-0 kubenswrapper[37036]: I0312 14:36:14.002348 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 14:36:14.022309 master-0 kubenswrapper[37036]: I0312 14:36:14.022244 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 14:36:14.042297 master-0 kubenswrapper[37036]: I0312 14:36:14.042238 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 14:36:14.062016 master-0 kubenswrapper[37036]: I0312 14:36:14.061964 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 14:36:14.082185 master-0 kubenswrapper[37036]: I0312 14:36:14.082071 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 14:36:14.101892 master-0 kubenswrapper[37036]: I0312 14:36:14.101855 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 14:36:14.121977 master-0 kubenswrapper[37036]: I0312 14:36:14.121942 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 14:36:14.142011 master-0 kubenswrapper[37036]: I0312 14:36:14.141957 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 14:36:14.161301 
master-0 kubenswrapper[37036]: I0312 14:36:14.161266 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 14:36:14.181844 master-0 kubenswrapper[37036]: I0312 14:36:14.181815 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 14:36:14.191485 master-0 kubenswrapper[37036]: I0312 14:36:14.191457 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.191683 master-0 kubenswrapper[37036]: I0312 14:36:14.191666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.191767 master-0 kubenswrapper[37036]: I0312 14:36:14.191734 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.191824 master-0 kubenswrapper[37036]: I0312 14:36:14.191758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" 
(UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:14.191922 master-0 kubenswrapper[37036]: I0312 14:36:14.191909 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:14.192023 master-0 kubenswrapper[37036]: I0312 14:36:14.192011 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:14.192112 master-0 kubenswrapper[37036]: I0312 14:36:14.192100 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.192204 master-0 kubenswrapper[37036]: I0312 14:36:14.192193 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.192313 master-0 kubenswrapper[37036]: I0312 14:36:14.192298 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.192409 master-0 kubenswrapper[37036]: I0312 14:36:14.192397 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.192511 master-0 kubenswrapper[37036]: I0312 14:36:14.192498 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:36:14.192625 master-0 kubenswrapper[37036]: I0312 14:36:14.192610 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:14.192729 master-0 kubenswrapper[37036]: I0312 14:36:14.192717 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " 
pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.192820 master-0 kubenswrapper[37036]: I0312 14:36:14.192807 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.192930 master-0 kubenswrapper[37036]: I0312 14:36:14.192412 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.193002 master-0 kubenswrapper[37036]: I0312 14:36:14.192908 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:14.193172 master-0 kubenswrapper[37036]: I0312 14:36:14.193152 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.193289 master-0 kubenswrapper[37036]: I0312 14:36:14.193276 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.193385 master-0 kubenswrapper[37036]: I0312 14:36:14.193365 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:14.193538 master-0 kubenswrapper[37036]: I0312 14:36:14.193520 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.193666 master-0 kubenswrapper[37036]: I0312 14:36:14.193653 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:14.193747 master-0 kubenswrapper[37036]: I0312 14:36:14.193736 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 
14:36:14.193867 master-0 kubenswrapper[37036]: I0312 14:36:14.193854 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.194009 master-0 kubenswrapper[37036]: I0312 14:36:14.193995 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:14.194158 master-0 kubenswrapper[37036]: I0312 14:36:14.194136 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.194300 master-0 kubenswrapper[37036]: I0312 14:36:14.194285 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.194467 master-0 kubenswrapper[37036]: I0312 14:36:14.194435 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.194528 master-0 kubenswrapper[37036]: I0312 14:36:14.194442 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.194528 master-0 kubenswrapper[37036]: I0312 14:36:14.194479 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:14.194592 master-0 kubenswrapper[37036]: I0312 14:36:14.194527 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:14.194592 master-0 kubenswrapper[37036]: I0312 14:36:14.194541 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e4d9407-ff79-4396-a37f-896617e024d4-mcd-auth-proxy-config\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:14.194592 master-0 kubenswrapper[37036]: 
I0312 14:36:14.194551 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:36:14.194674 master-0 kubenswrapper[37036]: I0312 14:36:14.194630 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.194674 master-0 kubenswrapper[37036]: I0312 14:36:14.194665 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:14.194736 master-0 kubenswrapper[37036]: I0312 14:36:14.194690 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:14.194736 master-0 kubenswrapper[37036]: I0312 14:36:14.194719 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.194799 master-0 kubenswrapper[37036]: I0312 14:36:14.194738 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:14.194799 master-0 kubenswrapper[37036]: I0312 14:36:14.194758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:14.194799 master-0 kubenswrapper[37036]: I0312 14:36:14.194779 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:14.194924 master-0 kubenswrapper[37036]: I0312 14:36:14.194812 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" Mar 12 14:36:14.194924 master-0 kubenswrapper[37036]: I0312 14:36:14.194833 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.194924 master-0 kubenswrapper[37036]: I0312 14:36:14.194846 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:14.194924 master-0 kubenswrapper[37036]: I0312 14:36:14.194889 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:14.195046 master-0 kubenswrapper[37036]: I0312 14:36:14.194950 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:14.195046 master-0 kubenswrapper[37036]: I0312 14:36:14.194971 37036 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.195046 master-0 kubenswrapper[37036]: I0312 14:36:14.194991 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" Mar 12 14:36:14.195046 master-0 kubenswrapper[37036]: I0312 14:36:14.195009 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" Mar 12 14:36:14.195046 master-0 kubenswrapper[37036]: I0312 14:36:14.195028 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.195183 master-0 kubenswrapper[37036]: I0312 14:36:14.195074 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:14.195183 master-0 kubenswrapper[37036]: I0312 14:36:14.195104 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:14.195183 master-0 kubenswrapper[37036]: I0312 14:36:14.195135 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.195183 master-0 kubenswrapper[37036]: I0312 14:36:14.195163 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e4d9407-ff79-4396-a37f-896617e024d4-proxy-tls\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:14.195183 master-0 kubenswrapper[37036]: I0312 14:36:14.195169 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:14.195354 master-0 kubenswrapper[37036]: I0312 14:36:14.195215 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.195354 master-0 kubenswrapper[37036]: I0312 14:36:14.195261 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.195354 master-0 kubenswrapper[37036]: I0312 14:36:14.195295 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:36:14.195555 master-0 kubenswrapper[37036]: I0312 14:36:14.195530 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9757edbb-8ce2-4513-9b32-a552df50634c-cert\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" Mar 12 14:36:14.195610 master-0 kubenswrapper[37036]: I0312 14:36:14.195539 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/3edaa533-ecbb-443e-a270-4cb4f923daf6-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.195610 master-0 kubenswrapper[37036]: I0312 14:36:14.195575 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.195610 master-0 kubenswrapper[37036]: I0312 14:36:14.195608 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.195721 master-0 kubenswrapper[37036]: I0312 14:36:14.195632 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:14.195721 master-0 kubenswrapper[37036]: I0312 14:36:14.195681 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:14.195782 master-0 kubenswrapper[37036]: I0312 14:36:14.195712 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9757edbb-8ce2-4513-9b32-a552df50634c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" Mar 12 14:36:14.195782 master-0 kubenswrapper[37036]: I0312 14:36:14.195727 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f569ed3b-924d-4829-b192-f508ee70658d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" Mar 12 14:36:14.195782 master-0 kubenswrapper[37036]: I0312 14:36:14.195717 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.195782 master-0 kubenswrapper[37036]: I0312 14:36:14.195771 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.195916 master-0 kubenswrapper[37036]: I0312 14:36:14.195795 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:14.195978 master-0 kubenswrapper[37036]: I0312 14:36:14.195949 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.195978 master-0 kubenswrapper[37036]: I0312 14:36:14.195955 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:14.196045 master-0 kubenswrapper[37036]: I0312 14:36:14.196010 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.196076 master-0 kubenswrapper[37036]: I0312 14:36:14.196052 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.196177 master-0 kubenswrapper[37036]: I0312 14:36:14.196150 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.196284 master-0 kubenswrapper[37036]: I0312 14:36:14.196245 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.196329 master-0 kubenswrapper[37036]: I0312 14:36:14.196309 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:14.196362 master-0 kubenswrapper[37036]: I0312 14:36:14.196353 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.196391 master-0 kubenswrapper[37036]: I0312 14:36:14.196378 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.196425 master-0 kubenswrapper[37036]: I0312 14:36:14.196407 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:14.196458 master-0 kubenswrapper[37036]: I0312 14:36:14.196428 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.196531 master-0 kubenswrapper[37036]: I0312 14:36:14.196506 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:14.196607 master-0 kubenswrapper[37036]: I0312 14:36:14.196582 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: 
\"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:14.196663 master-0 kubenswrapper[37036]: I0312 14:36:14.196636 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.196663 master-0 kubenswrapper[37036]: I0312 14:36:14.196642 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.196741 master-0 kubenswrapper[37036]: I0312 14:36:14.196649 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.196741 master-0 kubenswrapper[37036]: I0312 14:36:14.196698 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3edaa533-ecbb-443e-a270-4cb4f923daf6-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:14.196801 master-0 kubenswrapper[37036]: I0312 14:36:14.196735 37036 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:14.196801 master-0 kubenswrapper[37036]: I0312 14:36:14.196772 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:14.197374 master-0 kubenswrapper[37036]: I0312 14:36:14.197357 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/61d829d7-38e1-4826-942c-f7317c4a4bec-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:14.201288 master-0 kubenswrapper[37036]: I0312 14:36:14.201266 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 14:36:14.206507 master-0 kubenswrapper[37036]: I0312 14:36:14.206473 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd29b21c-7a0e-4311-952f-427b00468e66-serving-cert\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.221776 master-0 kubenswrapper[37036]: I0312 14:36:14.221727 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 14:36:14.223073 master-0 kubenswrapper[37036]: I0312 14:36:14.223018 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b77ad35-2fff-47bb-ad34-abb3868b09a9-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.241649 master-0 kubenswrapper[37036]: I0312 14:36:14.241594 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 14:36:14.245517 master-0 kubenswrapper[37036]: I0312 14:36:14.245440 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-service-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.262045 master-0 kubenswrapper[37036]: I0312 14:36:14.261973 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 14:36:14.290751 master-0 kubenswrapper[37036]: I0312 14:36:14.289589 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 14:36:14.294678 master-0 kubenswrapper[37036]: I0312 14:36:14.294629 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd29b21c-7a0e-4311-952f-427b00468e66-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:14.301349 master-0 kubenswrapper[37036]: I0312 14:36:14.301297 37036 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 14:36:14.321764 master-0 kubenswrapper[37036]: I0312 14:36:14.321699 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 14:36:14.341412 master-0 kubenswrapper[37036]: I0312 14:36:14.341302 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 14:36:14.342873 master-0 kubenswrapper[37036]: I0312 14:36:14.342824 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b77ad35-2fff-47bb-ad34-abb3868b09a9-images\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:14.361468 master-0 kubenswrapper[37036]: I0312 14:36:14.361406 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 14:36:14.364921 master-0 kubenswrapper[37036]: I0312 14:36:14.364851 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: \"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:36:14.381719 master-0 kubenswrapper[37036]: I0312 14:36:14.381666 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 14:36:14.385316 master-0 kubenswrapper[37036]: I0312 14:36:14.385275 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-config\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.401759 master-0 kubenswrapper[37036]: I0312 14:36:14.401719 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 14:36:14.403930 master-0 kubenswrapper[37036]: I0312 14:36:14.403878 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f5cd3ff-ced6-47e3-8054-d83053d87680-images\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.422000 master-0 kubenswrapper[37036]: I0312 14:36:14.421948 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 14:36:14.424318 master-0 kubenswrapper[37036]: I0312 14:36:14.424269 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-apiservice-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:14.426073 master-0 kubenswrapper[37036]: I0312 14:36:14.426027 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef824102-83a5-4629-8057-d4f1a57a530d-webhook-cert\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:14.441463 master-0 kubenswrapper[37036]: I0312 
14:36:14.441392 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:36:14.446018 master-0 kubenswrapper[37036]: I0312 14:36:14.445976 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-webhook-certs\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:36:14.462257 master-0 kubenswrapper[37036]: I0312 14:36:14.462184 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 14:36:14.465937 master-0 kubenswrapper[37036]: I0312 14:36:14.465880 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5cd3ff-ced6-47e3-8054-d83053d87680-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:14.482599 master-0 kubenswrapper[37036]: I0312 14:36:14.482547 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 14:36:14.483749 master-0 kubenswrapper[37036]: I0312 14:36:14.483703 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.502426 master-0 kubenswrapper[37036]: I0312 14:36:14.502369 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-btzl2" Mar 12 14:36:14.523704 master-0 kubenswrapper[37036]: I0312 14:36:14.523616 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 14:36:14.527343 master-0 kubenswrapper[37036]: I0312 14:36:14.527299 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40912d56-8288-4d58-ad91-7455bd460887-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.542576 master-0 kubenswrapper[37036]: I0312 14:36:14.542501 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 14:36:14.560307 master-0 kubenswrapper[37036]: I0312 14:36:14.560232 37036 request.go:700] Waited for 2.000243406s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0 Mar 12 14:36:14.561584 master-0 kubenswrapper[37036]: I0312 14:36:14.561536 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 14:36:14.563752 master-0 kubenswrapper[37036]: I0312 14:36:14.563678 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40912d56-8288-4d58-ad91-7455bd460887-config\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:14.581586 master-0 kubenswrapper[37036]: I0312 
14:36:14.581517 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 14:36:14.602299 master-0 kubenswrapper[37036]: I0312 14:36:14.602187 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-mxcxn" Mar 12 14:36:14.621890 master-0 kubenswrapper[37036]: I0312 14:36:14.621815 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pjs2n" Mar 12 14:36:14.641933 master-0 kubenswrapper[37036]: I0312 14:36:14.641853 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-q2s6g" Mar 12 14:36:14.667069 master-0 kubenswrapper[37036]: I0312 14:36:14.667013 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-rn7z4" Mar 12 14:36:14.682809 master-0 kubenswrapper[37036]: I0312 14:36:14.682733 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-cfs7s" Mar 12 14:36:14.702383 master-0 kubenswrapper[37036]: I0312 14:36:14.702302 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 14:36:14.706046 master-0 kubenswrapper[37036]: I0312 14:36:14.706007 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/1047bb4a-135f-488d-9399-0518cb3a827d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.721721 master-0 
kubenswrapper[37036]: I0312 14:36:14.721676 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:36:14.742127 master-0 kubenswrapper[37036]: I0312 14:36:14.742044 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:36:14.761047 master-0 kubenswrapper[37036]: I0312 14:36:14.760995 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 14:36:14.765165 master-0 kubenswrapper[37036]: I0312 14:36:14.765118 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.781878 master-0 kubenswrapper[37036]: I0312 14:36:14.781830 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 14:36:14.786696 master-0 kubenswrapper[37036]: I0312 14:36:14.786659 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1047bb4a-135f-488d-9399-0518cb3a827d-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:14.802471 master-0 kubenswrapper[37036]: I0312 14:36:14.802432 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 14:36:14.807319 master-0 kubenswrapper[37036]: I0312 14:36:14.807258 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/61d829d7-38e1-4826-942c-f7317c4a4bec-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:14.821974 master-0 kubenswrapper[37036]: I0312 14:36:14.821919 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-kblmx" Mar 12 14:36:14.842125 master-0 kubenswrapper[37036]: I0312 14:36:14.842026 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-28q99" Mar 12 14:36:14.862503 master-0 kubenswrapper[37036]: I0312 14:36:14.862393 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 14:36:14.866186 master-0 kubenswrapper[37036]: I0312 14:36:14.866147 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-node-bootstrap-token\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:14.901924 master-0 kubenswrapper[37036]: I0312 14:36:14.901784 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 14:36:14.905473 master-0 kubenswrapper[37036]: I0312 14:36:14.905429 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-certs\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:14.921220 master-0 kubenswrapper[37036]: I0312 14:36:14.921187 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 14:36:14.924500 master-0 kubenswrapper[37036]: I0312 14:36:14.924467 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-metrics-client-ca\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:14.925572 master-0 kubenswrapper[37036]: I0312 14:36:14.925540 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/59f21770-429b-4b63-82fd-50ce0daf698d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:14.926888 master-0 kubenswrapper[37036]: I0312 14:36:14.926856 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:14.926961 master-0 kubenswrapper[37036]: I0312 14:36:14.926855 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-metrics-client-ca\") pod 
\"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:14.927175 master-0 kubenswrapper[37036]: I0312 14:36:14.927094 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-metrics-client-ca\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:14.942052 master-0 kubenswrapper[37036]: I0312 14:36:14.941966 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kl5h5" Mar 12 14:36:14.961889 master-0 kubenswrapper[37036]: I0312 14:36:14.961487 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 14:36:14.962684 master-0 kubenswrapper[37036]: I0312 14:36:14.962623 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:14.981652 master-0 kubenswrapper[37036]: I0312 14:36:14.981596 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 14:36:14.984084 master-0 kubenswrapper[37036]: I0312 14:36:14.983794 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: 
\"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:15.002767 master-0 kubenswrapper[37036]: I0312 14:36:15.002674 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 14:36:15.008055 master-0 kubenswrapper[37036]: I0312 14:36:15.007871 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:15.023355 master-0 kubenswrapper[37036]: I0312 14:36:15.023289 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7zf28" Mar 12 14:36:15.042123 master-0 kubenswrapper[37036]: I0312 14:36:15.042049 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 14:36:15.045345 master-0 kubenswrapper[37036]: I0312 14:36:15.045311 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/59f21770-429b-4b63-82fd-50ce0daf698d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:15.062399 master-0 kubenswrapper[37036]: I0312 14:36:15.062343 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 14:36:15.063481 master-0 kubenswrapper[37036]: I0312 14:36:15.063445 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:15.082348 master-0 kubenswrapper[37036]: I0312 14:36:15.082236 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 14:36:15.086994 master-0 kubenswrapper[37036]: I0312 14:36:15.086946 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:15.102614 master-0 kubenswrapper[37036]: I0312 14:36:15.102561 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8hbfc" Mar 12 14:36:15.122443 master-0 kubenswrapper[37036]: I0312 14:36:15.122320 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 14:36:15.126242 master-0 kubenswrapper[37036]: I0312 14:36:15.126193 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:15.142304 master-0 kubenswrapper[37036]: I0312 14:36:15.142244 37036 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 14:36:15.146933 master-0 kubenswrapper[37036]: I0312 14:36:15.146852 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:15.162431 master-0 kubenswrapper[37036]: I0312 14:36:15.162361 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-ct5mf" Mar 12 14:36:15.181856 master-0 kubenswrapper[37036]: I0312 14:36:15.181792 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 14:36:15.182270 master-0 kubenswrapper[37036]: I0312 14:36:15.182228 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192313 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192425 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.192400739 +0000 UTC m=+35.200141716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192330 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192525 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192549 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.192520602 +0000 UTC m=+35.200261699 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192574 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.192561303 +0000 UTC m=+35.200302240 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192586 37036 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.192910 master-0 kubenswrapper[37036]: E0312 14:36:15.192637 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert podName:ef5679f7-5bf5-409d-b74b-64a9cbb6c701 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.192627375 +0000 UTC m=+35.200368412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert") pod "ingress-canary-dbdr9" (UID: "ef5679f7-5bf5-409d-b74b-64a9cbb6c701") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.193470 master-0 kubenswrapper[37036]: E0312 14:36:15.193431 37036 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.193560 master-0 kubenswrapper[37036]: E0312 14:36:15.193537 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.193517627 +0000 UTC m=+35.201258564 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194574 master-0 kubenswrapper[37036]: E0312 14:36:15.194511 37036 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194574 master-0 kubenswrapper[37036]: E0312 14:36:15.194532 37036 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194574 master-0 kubenswrapper[37036]: E0312 14:36:15.194550 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7nn9s21bftmgp: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194778 master-0 kubenswrapper[37036]: E0312 14:36:15.194555 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.194544003 +0000 UTC m=+35.202284940 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194778 master-0 kubenswrapper[37036]: E0312 14:36:15.194624 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config podName:b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.194614874 +0000 UTC m=+35.202355811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config") pod "node-exporter-5pkwh" (UID: "b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.194778 master-0 kubenswrapper[37036]: E0312 14:36:15.194636 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.194630825 +0000 UTC m=+35.202371762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196022 37036 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196059 37036 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196067 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.196055551 +0000 UTC m=+35.203796488 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196099 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196102 37036 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196116 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.196106942 +0000 UTC m=+35.203847879 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync secret cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196136 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle podName:f9dfe48c-daa1-4c18-9cf5-7b4930a0e649 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.196128432 +0000 UTC m=+35.203869499 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle") pod "telemeter-client-cbb5fd9f8-xq7vd" (UID: "f9dfe48c-daa1-4c18-9cf5-7b4930a0e649") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.196168 master-0 kubenswrapper[37036]: E0312 14:36:15.196153 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle podName:addf66af-4d97-4c1e-960d-ace98c27961b nodeName:}" failed. No retries permitted until 2026-03-12 14:36:16.196144813 +0000 UTC m=+35.203885850 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle") pod "metrics-server-85b44c7984-pzbfq" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b") : failed to sync configmap cache: timed out waiting for the condition Mar 12 14:36:15.201456 master-0 kubenswrapper[37036]: I0312 14:36:15.201434 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 14:36:15.221658 master-0 kubenswrapper[37036]: I0312 14:36:15.221592 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 14:36:15.241357 master-0 kubenswrapper[37036]: I0312 14:36:15.241287 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 14:36:15.261329 master-0 kubenswrapper[37036]: I0312 14:36:15.261278 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 14:36:15.281843 master-0 kubenswrapper[37036]: I0312 14:36:15.281781 37036 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gbqf6" Mar 12 14:36:15.302344 master-0 kubenswrapper[37036]: I0312 14:36:15.302306 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 14:36:15.335231 master-0 kubenswrapper[37036]: I0312 14:36:15.335178 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 14:36:15.341078 master-0 kubenswrapper[37036]: I0312 14:36:15.341026 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 14:36:15.361838 master-0 kubenswrapper[37036]: I0312 14:36:15.361773 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7nn9s21bftmgp" Mar 12 14:36:15.381807 master-0 kubenswrapper[37036]: I0312 14:36:15.381676 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-787xq" Mar 12 14:36:15.402021 master-0 kubenswrapper[37036]: I0312 14:36:15.401966 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 14:36:15.422003 master-0 kubenswrapper[37036]: I0312 14:36:15.421939 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 14:36:15.441953 master-0 kubenswrapper[37036]: I0312 14:36:15.441880 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 14:36:15.463053 master-0 kubenswrapper[37036]: I0312 14:36:15.462991 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 14:36:15.481727 master-0 kubenswrapper[37036]: I0312 14:36:15.481676 37036 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-vbgk2" Mar 12 14:36:15.502293 master-0 kubenswrapper[37036]: I0312 14:36:15.502237 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 14:36:15.521650 master-0 kubenswrapper[37036]: I0312 14:36:15.521591 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 14:36:15.553024 master-0 kubenswrapper[37036]: I0312 14:36:15.552966 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7krt\" (UniqueName: \"kubernetes.io/projected/a81be38f-e07e-4863-8d61-fdefc2713a6a-kube-api-access-b7krt\") pod \"kube-state-metrics-68b88f8cb5-vfvts\" (UID: \"a81be38f-e07e-4863-8d61-fdefc2713a6a\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vfvts" Mar 12 14:36:15.560360 master-0 kubenswrapper[37036]: I0312 14:36:15.560325 37036 request.go:700] Waited for 2.932498314s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token Mar 12 14:36:15.572457 master-0 kubenswrapper[37036]: I0312 14:36:15.572401 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4q4w\" (UniqueName: \"kubernetes.io/projected/7433d9bf-4edf-4787-a7a1-e5102c7264c7-kube-api-access-t4q4w\") pod \"network-operator-7c649bf6d4-ldxfn\" (UID: \"7433d9bf-4edf-4787-a7a1-e5102c7264c7\") " pod="openshift-network-operator/network-operator-7c649bf6d4-ldxfn" Mar 12 14:36:15.596482 master-0 kubenswrapper[37036]: I0312 14:36:15.596403 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdq5\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-kube-api-access-qhdq5\") pod \"ingress-operator-677db989d6-44hhf\" 
(UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:15.614788 master-0 kubenswrapper[37036]: I0312 14:36:15.614735 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcz9\" (UniqueName: \"kubernetes.io/projected/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-kube-api-access-mmcz9\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:15.633451 master-0 kubenswrapper[37036]: I0312 14:36:15.633358 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvngn\" (UniqueName: \"kubernetes.io/projected/8e733069-752a-4140-83eb-8287f1bce1a7-kube-api-access-qvngn\") pod \"network-check-target-8q2fv\" (UID: \"8e733069-752a-4140-83eb-8287f1bce1a7\") " pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:36:15.652483 master-0 kubenswrapper[37036]: I0312 14:36:15.652430 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxt4g\" (UniqueName: \"kubernetes.io/projected/6defef79-6058-466a-ae0b-8eb9258126be-kube-api-access-zxt4g\") pod \"ovnkube-control-plane-66b55d57d-xpc82\" (UID: \"6defef79-6058-466a-ae0b-8eb9258126be\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-xpc82" Mar 12 14:36:15.675419 master-0 kubenswrapper[37036]: I0312 14:36:15.675357 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjsjh\" (UniqueName: \"kubernetes.io/projected/8e4d9407-ff79-4396-a37f-896617e024d4-kube-api-access-sjsjh\") pod \"machine-config-daemon-ngzc8\" (UID: \"8e4d9407-ff79-4396-a37f-896617e024d4\") " pod="openshift-machine-config-operator/machine-config-daemon-ngzc8" Mar 12 14:36:15.697557 master-0 kubenswrapper[37036]: I0312 14:36:15.697490 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hxnzm\" (UniqueName: \"kubernetes.io/projected/9757756c-cb67-4b6f-99c3-dd63f904897a-kube-api-access-hxnzm\") pod \"multus-additional-cni-plugins-h868v\" (UID: \"9757756c-cb67-4b6f-99c3-dd63f904897a\") " pod="openshift-multus/multus-additional-cni-plugins-h868v" Mar 12 14:36:15.711792 master-0 kubenswrapper[37036]: I0312 14:36:15.711732 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:15.733792 master-0 kubenswrapper[37036]: I0312 14:36:15.733742 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shknb\" (UniqueName: \"kubernetes.io/projected/3815db41-fe01-43f6-b75c-4ccca9124f51-kube-api-access-shknb\") pod \"node-resolver-nml4k\" (UID: \"3815db41-fe01-43f6-b75c-4ccca9124f51\") " pod="openshift-dns/node-resolver-nml4k" Mar 12 14:36:15.755680 master-0 kubenswrapper[37036]: I0312 14:36:15.755624 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod \"route-controller-manager-7f8bfc67b-pz8rc\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") " pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:15.778921 master-0 kubenswrapper[37036]: I0312 14:36:15.777078 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/6b77ad35-2fff-47bb-ad34-abb3868b09a9-kube-api-access-m97zx\") pod \"machine-config-operator-fdb5c78b5-lds9v\" (UID: \"6b77ad35-2fff-47bb-ad34-abb3868b09a9\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-lds9v" Mar 12 14:36:15.796928 master-0 kubenswrapper[37036]: I0312 14:36:15.793174 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnhl\" (UniqueName: \"kubernetes.io/projected/8c6b9f13-4a3a-4920-a84b-f76516501f81-kube-api-access-2vnhl\") pod \"dns-operator-589895fbb7-q4wwv\" (UID: \"8c6b9f13-4a3a-4920-a84b-f76516501f81\") " pod="openshift-dns-operator/dns-operator-589895fbb7-q4wwv" Mar 12 14:36:15.817793 master-0 kubenswrapper[37036]: I0312 14:36:15.817742 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8pd\" (UniqueName: \"kubernetes.io/projected/879e9bf1-ce4a-40b7-a72c-fe4c61e96cea-kube-api-access-2z8pd\") pod \"cluster-node-tuning-operator-66c7586884-zghs6\" (UID: \"879e9bf1-ce4a-40b7-a72c-fe4c61e96cea\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-zghs6" Mar 12 14:36:15.836910 master-0 kubenswrapper[37036]: I0312 14:36:15.836839 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62ptf\" (UniqueName: \"kubernetes.io/projected/f569ed3b-924d-4829-b192-f508ee70658d-kube-api-access-62ptf\") pod \"cluster-samples-operator-664cb58b85-q29ch\" (UID: \"f569ed3b-924d-4829-b192-f508ee70658d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-q29ch" Mar 12 14:36:15.861050 master-0 kubenswrapper[37036]: I0312 14:36:15.860917 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpq4d\" (UniqueName: \"kubernetes.io/projected/1bc0d552-01c7-4212-a551-d16419f2dc80-kube-api-access-vpq4d\") pod \"marketplace-operator-64bf9778cb-qzdff\" (UID: \"1bc0d552-01c7-4212-a551-d16419f2dc80\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:15.885568 master-0 kubenswrapper[37036]: I0312 14:36:15.885475 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkmrv\" (UniqueName: \"kubernetes.io/projected/a2435b91-86d6-415b-a978-34cc859e74f2-kube-api-access-qkmrv\") pod \"cluster-image-registry-operator-86d6d77c7c-54cr9\" (UID: \"a2435b91-86d6-415b-a978-34cc859e74f2\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-54cr9" Mar 12 14:36:15.892362 master-0 kubenswrapper[37036]: I0312 14:36:15.892312 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqh9t\" (UniqueName: \"kubernetes.io/projected/07a6a1d6-fecf-4847-b7c1-160d5d7320fb-kube-api-access-cqh9t\") pod \"olm-operator-d64cfc9db-f48hv\" (UID: \"07a6a1d6-fecf-4847-b7c1-160d5d7320fb\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:15.915659 master-0 kubenswrapper[37036]: I0312 14:36:15.915445 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08ea0d9f-0635-4759-803e-572eca2f2d34-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-vpn8v\" (UID: \"08ea0d9f-0635-4759-803e-572eca2f2d34\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-vpn8v" Mar 12 14:36:15.944317 master-0 kubenswrapper[37036]: I0312 14:36:15.944257 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tm9\" (UniqueName: \"kubernetes.io/projected/8dd912f8-2c4d-4a0a-ba41-918ab5c235ba-kube-api-access-27tm9\") pod \"multus-admission-controller-7769569c45-s5wj4\" (UID: \"8dd912f8-2c4d-4a0a-ba41-918ab5c235ba\") " pod="openshift-multus/multus-admission-controller-7769569c45-s5wj4" Mar 12 14:36:16.013394 master-0 kubenswrapper[37036]: I0312 14:36:16.013335 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrqg\" (UniqueName: \"kubernetes.io/projected/f3c13c5f-3d1f-4e0a-b77b-732255680086-kube-api-access-wmrqg\") pod 
\"control-plane-machine-set-operator-6686554ddc-7s8fj\" (UID: \"f3c13c5f-3d1f-4e0a-b77b-732255680086\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-7s8fj" Mar 12 14:36:16.022101 master-0 kubenswrapper[37036]: I0312 14:36:16.022058 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2j\" (UniqueName: \"kubernetes.io/projected/8660cef9-0ab3-453e-a4b9-c243daa6ddb0-kube-api-access-clj2j\") pod \"csi-snapshot-controller-operator-5685fbc7d-ckmlv\" (UID: \"8660cef9-0ab3-453e-a4b9-c243daa6ddb0\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-ckmlv" Mar 12 14:36:16.023131 master-0 kubenswrapper[37036]: I0312 14:36:16.023098 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfns\" (UniqueName: \"kubernetes.io/projected/95c11263-0d68-4b11-bcfd-bcb0e96a6988-kube-api-access-6pfns\") pod \"multus-zttwz\" (UID: \"95c11263-0d68-4b11-bcfd-bcb0e96a6988\") " pod="openshift-multus/multus-zttwz" Mar 12 14:36:16.023937 master-0 kubenswrapper[37036]: I0312 14:36:16.023910 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbv7q\" (UniqueName: \"kubernetes.io/projected/d00a8cc7-7774-40bd-94a1-9ac2d0f63234-kube-api-access-bbv7q\") pod \"openshift-controller-manager-operator-8565d84698-zwdgk\" (UID: \"d00a8cc7-7774-40bd-94a1-9ac2d0f63234\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-zwdgk" Mar 12 14:36:16.047738 master-0 kubenswrapper[37036]: I0312 14:36:16.047705 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgft\" (UniqueName: \"kubernetes.io/projected/3ec846db-e344-4f9e-95e6-7a0055f52766-kube-api-access-tkgft\") pod \"dns-default-fpjck\" (UID: \"3ec846db-e344-4f9e-95e6-7a0055f52766\") " pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:16.076459 master-0 kubenswrapper[37036]: I0312 14:36:16.076425 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqkl\" (UniqueName: \"kubernetes.io/projected/b9d51570-06dd-4e2f-9c19-07fb694279ae-kube-api-access-2cqkl\") pod \"iptables-alerter-vb4v5\" (UID: \"b9d51570-06dd-4e2f-9c19-07fb694279ae\") " pod="openshift-network-operator/iptables-alerter-vb4v5" Mar 12 14:36:16.077434 master-0 kubenswrapper[37036]: I0312 14:36:16.077414 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcq7v\" (UniqueName: \"kubernetes.io/projected/dd29b21c-7a0e-4311-952f-427b00468e66-kube-api-access-rcq7v\") pod \"insights-operator-8f89dfddd-gltz7\" (UID: \"dd29b21c-7a0e-4311-952f-427b00468e66\") " pod="openshift-insights/insights-operator-8f89dfddd-gltz7" Mar 12 14:36:16.094594 master-0 kubenswrapper[37036]: I0312 14:36:16.094527 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"controller-manager-6689dcd7fd-vw9vd\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") " pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:16.114789 master-0 kubenswrapper[37036]: I0312 14:36:16.114754 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtr9\" (UniqueName: \"kubernetes.io/projected/e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9-kube-api-access-wwtr9\") pod \"network-node-identity-rqq4v\" (UID: \"e72c2e9c-978b-4f87-b6e3-6e20d82cc5e9\") " pod="openshift-network-node-identity/network-node-identity-rqq4v" Mar 12 14:36:16.134033 master-0 kubenswrapper[37036]: I0312 14:36:16.133980 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276qm\" (UniqueName: \"kubernetes.io/projected/06eb9f4b-167e-435b-8ef6-ae44fc0b85a9-kube-api-access-276qm\") pod \"cluster-storage-operator-6fbfc8dc8f-xgrsw\" (UID: 
\"06eb9f4b-167e-435b-8ef6-ae44fc0b85a9\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-xgrsw" Mar 12 14:36:16.156403 master-0 kubenswrapper[37036]: I0312 14:36:16.156280 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qv7x\" (UniqueName: \"kubernetes.io/projected/cba33300-f7ef-4547-97ff-62e223da79cf-kube-api-access-6qv7x\") pod \"redhat-marketplace-vmhgb\" (UID: \"cba33300-f7ef-4547-97ff-62e223da79cf\") " pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:16.174291 master-0 kubenswrapper[37036]: I0312 14:36:16.174237 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56twk\" (UniqueName: \"kubernetes.io/projected/e7f6ebd3-98c8-457c-a88c-7e81270f01b5-kube-api-access-56twk\") pod \"router-default-79f8cd6fdd-gjwhp\" (UID: \"e7f6ebd3-98c8-457c-a88c-7e81270f01b5\") " pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:16.196168 master-0 kubenswrapper[37036]: I0312 14:36:16.194413 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9czc5\" (UniqueName: \"kubernetes.io/projected/61de099a-410b-4d30-83e8-19cf5901cb27-kube-api-access-9czc5\") pod \"service-ca-84bfdbbb7f-7lx8p\" (UID: \"61de099a-410b-4d30-83e8-19cf5901cb27\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-7lx8p" Mar 12 14:36:16.213469 master-0 kubenswrapper[37036]: I0312 14:36:16.213422 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gnl\" (UniqueName: \"kubernetes.io/projected/d56089bf-177c-492d-8964-73a45574e7ed-kube-api-access-f2gnl\") pod \"csi-snapshot-controller-7577d6f48-z9hzg\" (UID: \"d56089bf-177c-492d-8964-73a45574e7ed\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-z9hzg" Mar 12 14:36:16.233025 master-0 kubenswrapper[37036]: I0312 14:36:16.232976 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-flj9j\" (UniqueName: \"kubernetes.io/projected/1047bb4a-135f-488d-9399-0518cb3a827d-kube-api-access-flj9j\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5\" (UID: \"1047bb4a-135f-488d-9399-0518cb3a827d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5" Mar 12 14:36:16.233849 master-0 kubenswrapper[37036]: I0312 14:36:16.233500 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.233849 master-0 kubenswrapper[37036]: I0312 14:36:16.233569 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.233849 master-0 kubenswrapper[37036]: I0312 14:36:16.233785 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.233849 master-0 kubenswrapper[37036]: I0312 14:36:16.233845 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: 
\"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:36:16.234099 master-0 kubenswrapper[37036]: I0312 14:36:16.233996 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.234153 master-0 kubenswrapper[37036]: I0312 14:36:16.234135 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.234272 master-0 kubenswrapper[37036]: I0312 14:36:16.234241 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:16.234341 master-0 kubenswrapper[37036]: I0312 14:36:16.234281 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:16.234341 master-0 kubenswrapper[37036]: I0312 14:36:16.234295 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.234341 master-0 kubenswrapper[37036]: I0312 14:36:16.234316 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.234474 master-0 kubenswrapper[37036]: I0312 14:36:16.234301 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-cert\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:36:16.234567 master-0 kubenswrapper[37036]: I0312 14:36:16.234519 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.234567 master-0 kubenswrapper[37036]: I0312 14:36:16.234555 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " 
pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:16.234669 master-0 kubenswrapper[37036]: I0312 14:36:16.234618 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.234669 master-0 kubenswrapper[37036]: I0312 14:36:16.234664 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.234857 master-0 kubenswrapper[37036]: I0312 14:36:16.234682 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.234857 master-0 kubenswrapper[37036]: I0312 14:36:16.234685 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-node-exporter-tls\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:16.234857 master-0 kubenswrapper[37036]: I0312 14:36:16.234843 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.235014 master-0 kubenswrapper[37036]: I0312 14:36:16.234952 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.235115 master-0 kubenswrapper[37036]: I0312 14:36:16.235033 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.235161 master-0 kubenswrapper[37036]: I0312 14:36:16.235120 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.235161 master-0 kubenswrapper[37036]: I0312 14:36:16.235146 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.235462 master-0 
kubenswrapper[37036]: I0312 14:36:16.235427 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-federate-client-tls\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.235462 master-0 kubenswrapper[37036]: I0312 14:36:16.235434 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9dfe48c-daa1-4c18-9cf5-7b4930a0e649-serving-certs-ca-bundle\") pod \"telemeter-client-cbb5fd9f8-xq7vd\" (UID: \"f9dfe48c-daa1-4c18-9cf5-7b4930a0e649\") " pod="openshift-monitoring/telemeter-client-cbb5fd9f8-xq7vd" Mar 12 14:36:16.251926 master-0 kubenswrapper[37036]: I0312 14:36:16.251849 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nnk\" (UniqueName: \"kubernetes.io/projected/1f9b15c6-b4ee-4907-8daa-376e3b438896-kube-api-access-w7nnk\") pod \"operator-controller-controller-manager-6598bfb6c4-754hn\" (UID: \"1f9b15c6-b4ee-4907-8daa-376e3b438896\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:16.277702 master-0 kubenswrapper[37036]: I0312 14:36:16.277642 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms688\" (UniqueName: \"kubernetes.io/projected/4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba-kube-api-access-ms688\") pod \"prometheus-operator-5ff8674d55-bwl7h\" (UID: \"4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-bwl7h" Mar 12 14:36:16.292964 master-0 kubenswrapper[37036]: I0312 14:36:16.292873 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbjg\" (UniqueName: 
\"kubernetes.io/projected/3f72fbbe-69f0-4622-be05-b839ff9b4d45-kube-api-access-2mbjg\") pod \"openshift-apiserver-operator-799b6db4d7-gt2tw\" (UID: \"3f72fbbe-69f0-4622-be05-b839ff9b4d45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gt2tw" Mar 12 14:36:16.314523 master-0 kubenswrapper[37036]: I0312 14:36:16.314477 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwtd\" (UniqueName: \"kubernetes.io/projected/3edaa533-ecbb-443e-a270-4cb4f923daf6-kube-api-access-smwtd\") pod \"cluster-baremetal-operator-5cdb4c5598-hs6mc\" (UID: \"3edaa533-ecbb-443e-a270-4cb4f923daf6\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hs6mc" Mar 12 14:36:16.333072 master-0 kubenswrapper[37036]: I0312 14:36:16.333025 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwl8\" (UniqueName: \"kubernetes.io/projected/2f59d485-9f69-4f36-836e-6338f84b7d69-kube-api-access-fbwl8\") pod \"redhat-operators-9bljc\" (UID: \"2f59d485-9f69-4f36-836e-6338f84b7d69\") " pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:16.353719 master-0 kubenswrapper[37036]: I0312 14:36:16.353681 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcz8p\" (UniqueName: \"kubernetes.io/projected/6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a-kube-api-access-jcz8p\") pod \"machine-config-server-nj7qg\" (UID: \"6b66a2a2-4e14-4d24-b89c-b1e8bbcec92a\") " pod="openshift-machine-config-operator/machine-config-server-nj7qg" Mar 12 14:36:16.375108 master-0 kubenswrapper[37036]: I0312 14:36:16.375064 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67sxk\" (UniqueName: \"kubernetes.io/projected/b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7-kube-api-access-67sxk\") pod \"node-exporter-5pkwh\" (UID: \"b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7\") " pod="openshift-monitoring/node-exporter-5pkwh" Mar 12 14:36:16.395318 master-0 
kubenswrapper[37036]: I0312 14:36:16.395246 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqcc\" (UniqueName: \"kubernetes.io/projected/272b53c4-134c-404d-9a27-c7371415b1f7-kube-api-access-nqqcc\") pod \"catalog-operator-7d9c49f57b-whr79\" (UID: \"272b53c4-134c-404d-9a27-c7371415b1f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:16.416222 master-0 kubenswrapper[37036]: I0312 14:36:16.416115 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdqv\" (UniqueName: \"kubernetes.io/projected/7fdce71e-8085-4316-be40-e535530c2ca4-kube-api-access-5bdqv\") pod \"network-metrics-daemon-n9v7g\" (UID: \"7fdce71e-8085-4316-be40-e535530c2ca4\") " pod="openshift-multus/network-metrics-daemon-n9v7g" Mar 12 14:36:16.432986 master-0 kubenswrapper[37036]: I0312 14:36:16.432906 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwrv\" (UniqueName: \"kubernetes.io/projected/8d775283-2696-4411-8ddf-d4e6000f0a0c-kube-api-access-lcwrv\") pod \"etcd-operator-5884b9cd56-mjxsv\" (UID: \"8d775283-2696-4411-8ddf-d4e6000f0a0c\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-mjxsv" Mar 12 14:36:16.446340 master-0 kubenswrapper[37036]: I0312 14:36:16.446281 37036 scope.go:117] "RemoveContainer" containerID="8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" Mar 12 14:36:16.462760 master-0 kubenswrapper[37036]: I0312 14:36:16.462707 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvhc\" (UniqueName: \"kubernetes.io/projected/ef824102-83a5-4629-8057-d4f1a57a530d-kube-api-access-5kvhc\") pod \"packageserver-5957c5c5dc-njb8x\" (UID: \"ef824102-83a5-4629-8057-d4f1a57a530d\") " pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:16.475462 master-0 kubenswrapper[37036]: I0312 14:36:16.475384 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sc9zd\" (UniqueName: \"kubernetes.io/projected/3dc73c14-852d-4957-b6ac-84366ba0594f-kube-api-access-sc9zd\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hkf2t\" (UID: \"3dc73c14-852d-4957-b6ac-84366ba0594f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hkf2t" Mar 12 14:36:16.500335 master-0 kubenswrapper[37036]: I0312 14:36:16.500289 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9cfq\" (UniqueName: \"kubernetes.io/projected/70710a0b-8b5d-40f5-b726-fd5e2836ffbe-kube-api-access-b9cfq\") pod \"certified-operators-mgqz4\" (UID: \"70710a0b-8b5d-40f5-b726-fd5e2836ffbe\") " pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:16.516100 master-0 kubenswrapper[37036]: I0312 14:36:16.516059 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqx42\" (UniqueName: \"kubernetes.io/projected/61d829d7-38e1-4826-942c-f7317c4a4bec-kube-api-access-zqx42\") pod \"machine-config-controller-ff46b7bdf-vfsmf\" (UID: \"61d829d7-38e1-4826-942c-f7317c4a4bec\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-vfsmf" Mar 12 14:36:16.535915 master-0 kubenswrapper[37036]: I0312 14:36:16.535851 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dkwb\" (UniqueName: \"kubernetes.io/projected/6f5cd3ff-ced6-47e3-8054-d83053d87680-kube-api-access-7dkwb\") pod \"machine-api-operator-84bf6db4f9-qtx2d\" (UID: \"6f5cd3ff-ced6-47e3-8054-d83053d87680\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-qtx2d" Mar 12 14:36:16.553736 master-0 kubenswrapper[37036]: I0312 14:36:16.553688 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4mx\" (UniqueName: \"kubernetes.io/projected/761993bb-2cba-4e1a-b304-36a24817af94-kube-api-access-2k4mx\") pod 
\"ovnkube-node-h4b4k\" (UID: \"761993bb-2cba-4e1a-b304-36a24817af94\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:16.574606 master-0 kubenswrapper[37036]: I0312 14:36:16.574553 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krm9\" (UniqueName: \"kubernetes.io/projected/39bda5b8-c748-4023-8680-8e8454512e5b-kube-api-access-4krm9\") pod \"apiserver-6b7d9dd778-7klpj\" (UID: \"39bda5b8-c748-4023-8680-8e8454512e5b\") " pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:16.580150 master-0 kubenswrapper[37036]: I0312 14:36:16.580121 37036 request.go:700] Waited for 3.936639139s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token Mar 12 14:36:16.595086 master-0 kubenswrapper[37036]: I0312 14:36:16.595023 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2zk\" (UniqueName: \"kubernetes.io/projected/7420564a-dc9d-4a2e-b0fc-0cc01f115e3b-kube-api-access-jh2zk\") pod \"apiserver-794bf69795-vntlz\" (UID: \"7420564a-dc9d-4a2e-b0fc-0cc01f115e3b\") " pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:16.613922 master-0 kubenswrapper[37036]: I0312 14:36:16.613854 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv69s\" (UniqueName: \"kubernetes.io/projected/5fb06459-09da-4620-91cf-8c3fe8f425db-kube-api-access-zv69s\") pod \"tuned-btfvk\" (UID: \"5fb06459-09da-4620-91cf-8c3fe8f425db\") " pod="openshift-cluster-node-tuning-operator/tuned-btfvk" Mar 12 14:36:16.652820 master-0 kubenswrapper[37036]: I0312 14:36:16.652768 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfxl\" (UniqueName: \"kubernetes.io/projected/0a898118-6d01-4211-92f0-43967b75405c-kube-api-access-8rfxl\") pod 
\"openshift-config-operator-64488f9d78-ljnjj\" (UID: \"0a898118-6d01-4211-92f0-43967b75405c\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:36:16.671709 master-0 kubenswrapper[37036]: I0312 14:36:16.671595 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2cq8\" (UniqueName: \"kubernetes.io/projected/9757edbb-8ce2-4513-9b32-a552df50634c-kube-api-access-m2cq8\") pod \"cluster-autoscaler-operator-69576476f7-b7296\" (UID: \"9757edbb-8ce2-4513-9b32-a552df50634c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-b7296" Mar 12 14:36:16.685964 master-0 kubenswrapper[37036]: I0312 14:36:16.685908 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdzwp\" (UniqueName: \"kubernetes.io/projected/4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4-kube-api-access-fdzwp\") pod \"migrator-57ccdf9b5-5zswp\" (UID: \"4ef01b7f-f7cb-4fd4-a75d-fe7a657d68d4\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zswp" Mar 12 14:36:16.704722 master-0 kubenswrapper[37036]: I0312 14:36:16.704655 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6-bound-sa-token\") pod \"ingress-operator-677db989d6-44hhf\" (UID: \"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" Mar 12 14:36:16.716379 master-0 kubenswrapper[37036]: I0312 14:36:16.716316 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35674af-162c-4a4a-8605-158b2326267e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-xxhhx\" (UID: \"a35674af-162c-4a4a-8605-158b2326267e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-xxhhx" Mar 12 14:36:16.741985 master-0 kubenswrapper[37036]: I0312 14:36:16.741935 37036 
scope.go:117] "RemoveContainer" containerID="c16aee696a6ef88096dfa67f9116c7fd30990cd6603084cb800a4c732d12f445" Mar 12 14:36:16.742701 master-0 kubenswrapper[37036]: I0312 14:36:16.742639 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tts\" (UniqueName: \"kubernetes.io/projected/85459175-2c9c-425d-bdfb-0a79c92ed110-kube-api-access-v8tts\") pod \"package-server-manager-854648ff6d-dvv78\" (UID: \"85459175-2c9c-425d-bdfb-0a79c92ed110\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:36:16.761225 master-0 kubenswrapper[37036]: I0312 14:36:16.761178 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz8z\" (UniqueName: \"kubernetes.io/projected/e2742559-1f28-4f2c-a873-d6a9348972fb-kube-api-access-nfz8z\") pod \"community-operators-4gbmc\" (UID: \"e2742559-1f28-4f2c-a873-d6a9348972fb\") " pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:16.798722 master-0 kubenswrapper[37036]: I0312 14:36:16.796845 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pvp\" (UniqueName: \"kubernetes.io/projected/76d596c0-6a41-43e1-9516-aee9ad834ec2-kube-api-access-c4pvp\") pod \"service-ca-operator-69b6fc6b88-fv6pp\" (UID: \"76d596c0-6a41-43e1-9516-aee9ad834ec2\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-fv6pp" Mar 12 14:36:16.798722 master-0 kubenswrapper[37036]: I0312 14:36:16.797529 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9gvf\" (UniqueName: \"kubernetes.io/projected/40912d56-8288-4d58-ad91-7455bd460887-kube-api-access-l9gvf\") pod \"machine-approver-754bdc9f9d-44b6s\" (UID: \"40912d56-8288-4d58-ad91-7455bd460887\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-44b6s" Mar 12 14:36:16.836136 master-0 kubenswrapper[37036]: I0312 14:36:16.836089 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vv6gf\" (UniqueName: \"kubernetes.io/projected/ef5679f7-5bf5-409d-b74b-64a9cbb6c701-kube-api-access-vv6gf\") pod \"ingress-canary-dbdr9\" (UID: \"ef5679f7-5bf5-409d-b74b-64a9cbb6c701\") " pod="openshift-ingress-canary/ingress-canary-dbdr9" Mar 12 14:36:16.860940 master-0 kubenswrapper[37036]: I0312 14:36:16.858015 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktncx\" (UniqueName: \"kubernetes.io/projected/39252b5a-d014-4319-ad81-3c1bf2ef585e-kube-api-access-ktncx\") pod \"catalogd-controller-manager-7f8b8b6f4c-2pj4z\" (UID: \"39252b5a-d014-4319-ad81-3c1bf2ef585e\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:16.863972 master-0 kubenswrapper[37036]: I0312 14:36:16.862620 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdqn\" (UniqueName: \"kubernetes.io/projected/59f21770-429b-4b63-82fd-50ce0daf698d-kube-api-access-qxdqn\") pod \"openshift-state-metrics-74cc79fd76-jms82\" (UID: \"59f21770-429b-4b63-82fd-50ce0daf698d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-jms82" Mar 12 14:36:16.880629 master-0 kubenswrapper[37036]: I0312 14:36:16.880249 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntrg\" (UniqueName: \"kubernetes.io/projected/f7b68603-8af3-4a50-8d39-86bbcdf1c597-kube-api-access-vntrg\") pod \"network-check-source-7c67b67d47-wdt59\" (UID: \"f7b68603-8af3-4a50-8d39-86bbcdf1c597\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-wdt59" Mar 12 14:36:16.900804 master-0 kubenswrapper[37036]: I0312 14:36:16.900760 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47xv\" (UniqueName: \"kubernetes.io/projected/42dbcb8f-e8c4-413e-977d-40aa6df226aa-kube-api-access-j47xv\") pod \"cluster-monitoring-operator-674cbfbd9d-6w5nv\" (UID: 
\"42dbcb8f-e8c4-413e-977d-40aa6df226aa\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-6w5nv" Mar 12 14:36:16.920279 master-0 kubenswrapper[37036]: I0312 14:36:16.919907 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtp2z\" (UniqueName: \"kubernetes.io/projected/de61e1fe-294c-48a6-8cf3-aeb4637ef2cc-kube-api-access-dtp2z\") pod \"cloud-credential-operator-55d85b7b47-pxgq9\" (UID: \"de61e1fe-294c-48a6-8cf3-aeb4637ef2cc\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-pxgq9" Mar 12 14:36:16.933869 master-0 kubenswrapper[37036]: I0312 14:36:16.933760 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"metrics-server-85b44c7984-pzbfq\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") " pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:16.954001 master-0 kubenswrapper[37036]: I0312 14:36:16.953956 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bba274a-38c7-4d13-88a5-6bc39228416c-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-qtql5\" (UID: \"1bba274a-38c7-4d13-88a5-6bc39228416c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-qtql5" Mar 12 14:36:16.976735 master-0 kubenswrapper[37036]: I0312 14:36:16.976672 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-smpl5\" (UID: \"a1ed125c-cbc0-4dfd-b006-f8d8bce3adb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-smpl5" Mar 12 14:36:17.006743 master-0 kubenswrapper[37036]: I0312 14:36:17.006684 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6z8v\" (UniqueName: \"kubernetes.io/projected/57930a54-89ab-4ec8-a504-74035bb74d63-kube-api-access-d6z8v\") pod \"authentication-operator-7c6989d6c4-jpf47\" (UID: \"57930a54-89ab-4ec8-a504-74035bb74d63\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-jpf47" Mar 12 14:36:17.008970 master-0 kubenswrapper[37036]: I0312 14:36:17.008949 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/5.log" Mar 12 14:36:17.020084 master-0 kubenswrapper[37036]: I0312 14:36:17.020022 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtqp6\" (UniqueName: \"kubernetes.io/projected/8106d14a-b448-4dd1-bccd-926f85394b5d-kube-api-access-jtqp6\") pod \"cluster-olm-operator-77899cf6d-h8sq4\" (UID: \"8106d14a-b448-4dd1-bccd-926f85394b5d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-h8sq4" Mar 12 14:36:17.034924 master-0 kubenswrapper[37036]: E0312 14:36:17.034185 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.034924 master-0 kubenswrapper[37036]: E0312 14:36:17.034221 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.034924 master-0 kubenswrapper[37036]: E0312 14:36:17.034279 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:17.534262407 +0000 UTC m=+36.542003344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.053030 master-0 kubenswrapper[37036]: E0312 14:36:17.052972 37036 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.819s" Mar 12 14:36:17.053030 master-0 kubenswrapper[37036]: I0312 14:36:17.053013 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4kp7s"] Mar 12 14:36:17.053502 master-0 kubenswrapper[37036]: E0312 14:36:17.053483 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a56d42a-efb4-4956-acab-d12c7ca5276e" containerName="installer" Mar 12 14:36:17.053502 master-0 kubenswrapper[37036]: I0312 14:36:17.053501 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a56d42a-efb4-4956-acab-d12c7ca5276e" containerName="installer" Mar 12 14:36:17.053589 master-0 kubenswrapper[37036]: E0312 14:36:17.053539 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer" Mar 12 14:36:17.053589 master-0 kubenswrapper[37036]: I0312 14:36:17.053546 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer" Mar 12 14:36:17.053589 master-0 kubenswrapper[37036]: E0312 14:36:17.053570 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer" Mar 12 14:36:17.053589 master-0 kubenswrapper[37036]: I0312 14:36:17.053576 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: E0312 
14:36:17.053600 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: I0312 14:36:17.053607 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: E0312 14:36:17.053617 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: I0312 14:36:17.053623 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: E0312 14:36:17.053655 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: I0312 14:36:17.053661 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: E0312 14:36:17.053675 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: I0312 14:36:17.053681 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: E0312 14:36:17.053704 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller" Mar 12 14:36:17.053704 master-0 kubenswrapper[37036]: I0312 14:36:17.053711 37036 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: E0312 14:36:17.053734 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: I0312 14:36:17.053741 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: E0312 14:36:17.053762 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: I0312 14:36:17.053768 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: E0312 14:36:17.053781 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: I0312 14:36:17.053787 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: E0312 14:36:17.053803 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer" Mar 12 14:36:17.053989 master-0 kubenswrapper[37036]: I0312 14:36:17.053809 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054017 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8675d4-a0be-42a3-96af-e56f5fb02983" containerName="installer" Mar 12 14:36:17.054282 master-0 
kubenswrapper[37036]: I0312 14:36:17.054044 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fc4965-b390-4edc-a407-d431b06d7612" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054070 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a56d42a-efb4-4956-acab-d12c7ca5276e" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054086 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c3501c-0ebe-46d0-b2ed-540f96cd137c" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054113 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2b4b06-98cd-4ca3-aebe-d49651c6013f" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054131 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd52682-bf05-44fc-9790-8adfc87ca087" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054142 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="146495bf-0787-483f-a9fc-0e8925b89150" containerName="assisted-installer-controller" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054159 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b56974-d2b1-4205-af5a-70cc2b616d1a" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054177 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fed292c3d5a90a99bfee43e89190405" containerName="cluster-policy-controller" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054198 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0743910-1ba7-490d-bc3e-5126562b04aa" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054225 37036 
memory_manager.go:354] "RemoveStaleState removing state" podUID="941c0808-bbfd-467e-b733-3a8294163ee5" containerName="installer" Mar 12 14:36:17.054282 master-0 kubenswrapper[37036]: I0312 14:36:17.054243 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d8e6e9-c10f-4b43-8155-9addbfddba2e" containerName="installer" Mar 12 14:36:17.059230 master-0 kubenswrapper[37036]: I0312 14:36:17.059184 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.068933 master-0 kubenswrapper[37036]: I0312 14:36:17.067050 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 12 14:36:17.082445 master-0 kubenswrapper[37036]: I0312 14:36:17.082328 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 14:36:17.102698 master-0 kubenswrapper[37036]: I0312 14:36:17.102640 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 14:36:17.127859 master-0 kubenswrapper[37036]: I0312 14:36:17.127789 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 14:36:17.141772 master-0 kubenswrapper[37036]: I0312 14:36:17.141725 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-t6rc9" Mar 12 14:36:17.158021 master-0 kubenswrapper[37036]: I0312 14:36:17.157982 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-config\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 
14:36:17.158305 master-0 kubenswrapper[37036]: I0312 14:36:17.158268 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-serving-cert\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.158759 master-0 kubenswrapper[37036]: I0312 14:36:17.158728 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xqj\" (UniqueName: \"kubernetes.io/projected/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-kube-api-access-v7xqj\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.160967 master-0 kubenswrapper[37036]: I0312 14:36:17.160932 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-trusted-ca\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.161151 master-0 kubenswrapper[37036]: I0312 14:36:17.161110 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 14:36:17.181583 master-0 kubenswrapper[37036]: I0312 14:36:17.181529 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 14:36:17.222559 master-0 kubenswrapper[37036]: I0312 14:36:17.222440 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:36:17.222559 master-0 
kubenswrapper[37036]: I0312 14:36:17.222499 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4kp7s"] Mar 12 14:36:17.222559 master-0 kubenswrapper[37036]: I0312 14:36:17.222526 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-b5qg4" Mar 12 14:36:17.222559 master-0 kubenswrapper[37036]: I0312 14:36:17.222534 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55"} Mar 12 14:36:17.222559 master-0 kubenswrapper[37036]: I0312 14:36:17.222554 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 14:36:17.222559 master-0 kubenswrapper[37036]: I0312 14:36:17.222565 37036 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48455a35-581b-463e-bd51-87d671e06402" Mar 12 14:36:17.222899 master-0 kubenswrapper[37036]: I0312 14:36:17.222578 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 14:36:17.222899 master-0 kubenswrapper[37036]: I0312 14:36:17.222586 37036 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48455a35-581b-463e-bd51-87d671e06402" Mar 12 14:36:17.223091 master-0 kubenswrapper[37036]: I0312 14:36:17.223056 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:17.223151 master-0 kubenswrapper[37036]: I0312 14:36:17.223132 37036 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:36:17.223204 master-0 kubenswrapper[37036]: I0312 14:36:17.223160 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:17.223204 master-0 kubenswrapper[37036]: I0312 14:36:17.223175 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:17.223278 master-0 kubenswrapper[37036]: I0312 14:36:17.223192 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-44hhf" event={"ID":"4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6","Type":"ContainerStarted","Data":"9227043cc31123f787743e9d7ffefea1c269203c8a6e94d806159d1b37819e4f"} Mar 12 14:36:17.223278 master-0 kubenswrapper[37036]: I0312 14:36:17.223224 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" event={"ID":"e7f6ebd3-98c8-457c-a88c-7e81270f01b5","Type":"ContainerStarted","Data":"aa2d02ba96811cb65a805c82c41959e750b355a5d970ef9c958ca5f750199bde"} Mar 12 14:36:17.223278 master-0 kubenswrapper[37036]: I0312 14:36:17.223251 37036 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerID="cri-o://8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" Mar 12 14:36:17.223278 master-0 kubenswrapper[37036]: I0312 14:36:17.223264 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223289 37036 status_manager.go:379] "Container startup changed for unknown container" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" 
containerID="cri-o://8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223303 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223326 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8q2fv" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223341 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223360 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223375 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:17.223417 master-0 kubenswrapper[37036]: I0312 14:36:17.223399 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:17.223676 master-0 kubenswrapper[37036]: I0312 14:36:17.223551 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:17.223676 master-0 kubenswrapper[37036]: I0312 14:36:17.223570 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:17.223676 master-0 kubenswrapper[37036]: I0312 14:36:17.223607 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 
14:36:17.223676 master-0 kubenswrapper[37036]: I0312 14:36:17.223647 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:17.223676 master-0 kubenswrapper[37036]: I0312 14:36:17.223668 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223687 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223709 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-qzdff" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223732 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223742 37036 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223758 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-f48hv" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223774 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223791 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fpjck" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223800 37036 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:17.223823 master-0 kubenswrapper[37036]: I0312 14:36:17.223823 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmhgb" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223841 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223862 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223875 37036 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" containerID="cri-o://8267e1775d4f1f71ce9ca7f7438e5d643c261adc1297b9c3415c07d0974bcee7" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223883 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223906 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-754hn" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223936 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-whr79" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223945 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 
14:36:17.223955 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223966 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223976 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223986 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.223994 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224003 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224012 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224022 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224032 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224043 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224071 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:17.224101 master-0 kubenswrapper[37036]: I0312 14:36:17.224094 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:17.237622 master-0 kubenswrapper[37036]: I0312 14:36:17.235854 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-2pj4z" Mar 12 14:36:17.253879 master-0 kubenswrapper[37036]: I0312 14:36:17.253828 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-dvv78" Mar 12 14:36:17.253879 master-0 kubenswrapper[37036]: I0312 14:36:17.253880 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" Mar 12 14:36:17.254163 master-0 kubenswrapper[37036]: I0312 14:36:17.253947 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-ljnjj" Mar 12 14:36:17.254163 master-0 kubenswrapper[37036]: I0312 14:36:17.253968 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5957c5c5dc-njb8x" Mar 12 14:36:17.254163 master-0 kubenswrapper[37036]: I0312 14:36:17.253992 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj" Mar 12 14:36:17.254163 master-0 kubenswrapper[37036]: I0312 14:36:17.254016 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.277355 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-config\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.277394 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-serving-cert\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.277883 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xqj\" (UniqueName: \"kubernetes.io/projected/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-kube-api-access-v7xqj\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.277964 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-trusted-ca\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.278290 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-config\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.278970 37036 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 14:36:17.282938 master-0 kubenswrapper[37036]: I0312 14:36:17.279588 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-trusted-ca\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.295935 master-0 kubenswrapper[37036]: I0312 14:36:17.284697 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgqz4" Mar 12 14:36:17.295935 master-0 kubenswrapper[37036]: I0312 14:36:17.285221 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-serving-cert\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.295935 master-0 kubenswrapper[37036]: I0312 14:36:17.287264 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.321933 master-0 kubenswrapper[37036]: I0312 14:36:17.310200 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gbmc" Mar 12 14:36:17.321933 master-0 
kubenswrapper[37036]: I0312 14:36:17.310379 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.321933 master-0 kubenswrapper[37036]: I0312 14:36:17.316314 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4b4k" Mar 12 14:36:17.321933 master-0 kubenswrapper[37036]: I0312 14:36:17.320766 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xqj\" (UniqueName: \"kubernetes.io/projected/c2f3fb87-655d-4622-b0c3-4288a9bb76d2-kube-api-access-v7xqj\") pod \"console-operator-6c7fb6b958-4kp7s\" (UID: \"c2f3fb87-655d-4622-b0c3-4288a9bb76d2\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.329240 master-0 kubenswrapper[37036]: I0312 14:36:17.323123 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9bljc" Mar 12 14:36:17.383928 master-0 kubenswrapper[37036]: I0312 14:36:17.381230 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" Mar 12 14:36:17.449738 master-0 kubenswrapper[37036]: I0312 14:36:17.447619 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.459977 master-0 kubenswrapper[37036]: I0312 14:36:17.458930 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:17.584603 master-0 kubenswrapper[37036]: I0312 14:36:17.584365 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:17.584603 master-0 kubenswrapper[37036]: E0312 14:36:17.584536 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.584603 master-0 kubenswrapper[37036]: E0312 14:36:17.584551 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.584603 master-0 kubenswrapper[37036]: E0312 14:36:17.584584 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:18.58457133 +0000 UTC m=+37.592312267 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:17.676989 master-0 kubenswrapper[37036]: I0312 14:36:17.675598 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=5.675577336 podStartE2EDuration="5.675577336s" podCreationTimestamp="2026-03-12 14:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:36:17.675365261 +0000 UTC m=+36.683106198" watchObservedRunningTime="2026-03-12 14:36:17.675577336 +0000 UTC m=+36.683318273" Mar 12 14:36:17.809932 master-0 kubenswrapper[37036]: I0312 14:36:17.809692 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4kp7s"] Mar 12 14:36:17.846464 master-0 kubenswrapper[37036]: I0312 14:36:17.845724 37036 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:36:17.890790 master-0 kubenswrapper[37036]: I0312 14:36:17.890727 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:36:18.029864 master-0 kubenswrapper[37036]: I0312 14:36:18.029619 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" event={"ID":"c2f3fb87-655d-4622-b0c3-4288a9bb76d2","Type":"ContainerStarted","Data":"bf662646e0a51564f9a8b178a7511458cdd6df48b2a387f7fe5f1e4df9b3c9e3"} Mar 12 14:36:18.034105 master-0 kubenswrapper[37036]: I0312 14:36:18.034066 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:18.037328 master-0 kubenswrapper[37036]: I0312 14:36:18.037280 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-gjwhp" Mar 12 14:36:18.608059 master-0 kubenswrapper[37036]: I0312 14:36:18.603976 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:18.608059 master-0 kubenswrapper[37036]: E0312 14:36:18.604348 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:18.608059 master-0 kubenswrapper[37036]: E0312 14:36:18.604369 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:18.608059 master-0 kubenswrapper[37036]: E0312 14:36:18.604541 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:20.604522602 +0000 UTC m=+39.612263539 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:19.058330 master-0 kubenswrapper[37036]: I0312 14:36:19.058246 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=7.058224779 podStartE2EDuration="7.058224779s" podCreationTimestamp="2026-03-12 14:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:36:19.057398448 +0000 UTC m=+38.065139385" watchObservedRunningTime="2026-03-12 14:36:19.058224779 +0000 UTC m=+38.065965716" Mar 12 14:36:20.673724 master-0 kubenswrapper[37036]: I0312 14:36:20.673512 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 14:36:20.674261 master-0 kubenswrapper[37036]: E0312 14:36:20.673819 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:20.674261 master-0 kubenswrapper[37036]: E0312 14:36:20.673846 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:20.674261 master-0 kubenswrapper[37036]: E0312 14:36:20.673933 37036 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:24.673898124 +0000 UTC m=+43.681639061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:21.056810 master-0 kubenswrapper[37036]: I0312 14:36:21.056730 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"] Mar 12 14:36:21.058294 master-0 kubenswrapper[37036]: I0312 14:36:21.058225 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.061145 master-0 kubenswrapper[37036]: I0312 14:36:21.061080 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 14:36:21.061350 master-0 kubenswrapper[37036]: I0312 14:36:21.061183 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-fsj54" Mar 12 14:36:21.061350 master-0 kubenswrapper[37036]: I0312 14:36:21.061082 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 14:36:21.061567 master-0 kubenswrapper[37036]: I0312 14:36:21.061525 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 14:36:21.061819 master-0 kubenswrapper[37036]: I0312 14:36:21.061773 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 
14:36:21.064154 master-0 kubenswrapper[37036]: I0312 14:36:21.064106 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 14:36:21.064335 master-0 kubenswrapper[37036]: I0312 14:36:21.064303 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 14:36:21.064592 master-0 kubenswrapper[37036]: I0312 14:36:21.064560 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 14:36:21.065739 master-0 kubenswrapper[37036]: I0312 14:36:21.065665 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 14:36:21.065883 master-0 kubenswrapper[37036]: I0312 14:36:21.065840 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 14:36:21.066015 master-0 kubenswrapper[37036]: I0312 14:36:21.065976 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 14:36:21.066142 master-0 kubenswrapper[37036]: I0312 14:36:21.066112 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 14:36:21.072814 master-0 kubenswrapper[37036]: I0312 14:36:21.072661 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.080874 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.080941 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.080974 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081008 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081047 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " 
pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081073 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081097 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081124 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: I0312 14:36:21.081155 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081191 master-0 kubenswrapper[37036]: 
I0312 14:36:21.081197 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081649 master-0 kubenswrapper[37036]: I0312 14:36:21.081229 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081649 master-0 kubenswrapper[37036]: I0312 14:36:21.081254 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8tp9\" (UniqueName: \"kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.081649 master-0 kubenswrapper[37036]: I0312 14:36:21.081281 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.084935 master-0 kubenswrapper[37036]: I0312 14:36:21.082673 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 14:36:21.099763 master-0 kubenswrapper[37036]: I0312 14:36:21.093510 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"] Mar 12 14:36:21.181695 master-0 kubenswrapper[37036]: I0312 14:36:21.181637 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.181958 master-0 kubenswrapper[37036]: I0312 14:36:21.181811 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.181958 master-0 kubenswrapper[37036]: I0312 14:36:21.181847 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.181958 master-0 kubenswrapper[37036]: I0312 14:36:21.181871 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: 
\"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.181958 master-0 kubenswrapper[37036]: I0312 14:36:21.181892 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.181958 master-0 kubenswrapper[37036]: I0312 14:36:21.181956 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182296 master-0 kubenswrapper[37036]: I0312 14:36:21.181983 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182296 master-0 kubenswrapper[37036]: E0312 14:36:21.181978 37036 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 12 14:36:21.182296 master-0 kubenswrapper[37036]: E0312 14:36:21.182079 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig podName:bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:36:21.682054919 +0000 UTC m=+40.689795936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig") pod "oauth-openshift-d94bf6d99-jdmf7" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34") : configmap "v4-0-config-system-cliconfig" not found Mar 12 14:36:21.182296 master-0 kubenswrapper[37036]: I0312 14:36:21.182001 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8tp9\" (UniqueName: \"kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182562 master-0 kubenswrapper[37036]: E0312 14:36:21.182403 37036 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 12 14:36:21.182562 master-0 kubenswrapper[37036]: E0312 14:36:21.182451 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session podName:bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:21.682435929 +0000 UTC m=+40.690176956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session") pod "oauth-openshift-d94bf6d99-jdmf7" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34") : secret "v4-0-config-system-session" not found Mar 12 14:36:21.182685 master-0 kubenswrapper[37036]: I0312 14:36:21.182591 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182685 master-0 kubenswrapper[37036]: I0312 14:36:21.182647 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182685 master-0 kubenswrapper[37036]: I0312 14:36:21.182667 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.182685 master-0 kubenswrapper[37036]: I0312 14:36:21.182687 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir\") pod \"oauth-openshift-d94bf6d99-jdmf7\" 
(UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.183066 master-0 kubenswrapper[37036]: I0312 14:36:21.182705 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.183417 master-0 kubenswrapper[37036]: I0312 14:36:21.183336 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.183614 master-0 kubenswrapper[37036]: I0312 14:36:21.183577 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.183733 master-0 kubenswrapper[37036]: I0312 14:36:21.183639 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:21.183733 master-0 kubenswrapper[37036]: I0312 14:36:21.183711 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.186828 master-0 kubenswrapper[37036]: I0312 14:36:21.186760 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.186828 master-0 kubenswrapper[37036]: I0312 14:36:21.186825 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.187205 master-0 kubenswrapper[37036]: I0312 14:36:21.187167 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.188279 master-0 kubenswrapper[37036]: I0312 14:36:21.188227 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.189867 master-0 kubenswrapper[37036]: I0312 14:36:21.189795 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.201021 master-0 kubenswrapper[37036]: I0312 14:36:21.199339 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8tp9\" (UniqueName: \"kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.201021 master-0 kubenswrapper[37036]: I0312 14:36:21.199587 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.520040 master-0 kubenswrapper[37036]: I0312 14:36:21.519989 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-64948d9545-xshsb"]
Mar 12 14:36:21.521172 master-0 kubenswrapper[37036]: I0312 14:36:21.521151 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:21.522669 master-0 kubenswrapper[37036]: I0312 14:36:21.522621 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-d48x8"
Mar 12 14:36:21.523376 master-0 kubenswrapper[37036]: I0312 14:36:21.523338 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 12 14:36:21.533419 master-0 kubenswrapper[37036]: I0312 14:36:21.533359 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-64948d9545-xshsb"]
Mar 12 14:36:21.688183 master-0 kubenswrapper[37036]: I0312 14:36:21.688107 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.688183 master-0 kubenswrapper[37036]: I0312 14:36:21.688161 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.688804 master-0 kubenswrapper[37036]: I0312 14:36:21.688759 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a725ec48-e77d-4fce-957a-67abe8712193-monitoring-plugin-cert\") pod \"monitoring-plugin-64948d9545-xshsb\" (UID: \"a725ec48-e77d-4fce-957a-67abe8712193\") " pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:21.689125 master-0 kubenswrapper[37036]: E0312 14:36:21.689096 37036 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:21.689189 master-0 kubenswrapper[37036]: E0312 14:36:21.689165 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig podName:bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:22.689145368 +0000 UTC m=+41.696886305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig") pod "oauth-openshift-d94bf6d99-jdmf7" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34") : configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:21.707862 master-0 kubenswrapper[37036]: I0312 14:36:21.707779 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:21.755247 master-0 kubenswrapper[37036]: I0312 14:36:21.755199 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6b7d9dd778-7klpj"
Mar 12 14:36:21.759383 master-0 kubenswrapper[37036]: I0312 14:36:21.759336 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-794bf69795-vntlz"
Mar 12 14:36:21.790338 master-0 kubenswrapper[37036]: I0312 14:36:21.790191 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a725ec48-e77d-4fce-957a-67abe8712193-monitoring-plugin-cert\") pod \"monitoring-plugin-64948d9545-xshsb\" (UID: \"a725ec48-e77d-4fce-957a-67abe8712193\") " pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:21.795390 master-0 kubenswrapper[37036]: I0312 14:36:21.794091 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a725ec48-e77d-4fce-957a-67abe8712193-monitoring-plugin-cert\") pod \"monitoring-plugin-64948d9545-xshsb\" (UID: \"a725ec48-e77d-4fce-957a-67abe8712193\") " pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:21.797493 master-0 kubenswrapper[37036]: I0312 14:36:21.797445 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-ftnvc"]
Mar 12 14:36:21.803006 master-0 kubenswrapper[37036]: I0312 14:36:21.801492 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-ftnvc"
Mar 12 14:36:21.804127 master-0 kubenswrapper[37036]: I0312 14:36:21.803966 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-jggtb"
Mar 12 14:36:21.804224 master-0 kubenswrapper[37036]: I0312 14:36:21.804127 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 12 14:36:21.804276 master-0 kubenswrapper[37036]: I0312 14:36:21.804218 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 12 14:36:21.815518 master-0 kubenswrapper[37036]: I0312 14:36:21.815460 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-ftnvc"]
Mar 12 14:36:21.848929 master-0 kubenswrapper[37036]: I0312 14:36:21.847188 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:22.004851 master-0 kubenswrapper[37036]: I0312 14:36:22.004784 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhqn\" (UniqueName: \"kubernetes.io/projected/70c26392-cfee-4dc3-9c71-4684558daa07-kube-api-access-8zhqn\") pod \"downloads-84f57b9877-ftnvc\" (UID: \"70c26392-cfee-4dc3-9c71-4684558daa07\") " pod="openshift-console/downloads-84f57b9877-ftnvc"
Mar 12 14:36:22.139930 master-0 kubenswrapper[37036]: I0312 14:36:22.126841 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhqn\" (UniqueName: \"kubernetes.io/projected/70c26392-cfee-4dc3-9c71-4684558daa07-kube-api-access-8zhqn\") pod \"downloads-84f57b9877-ftnvc\" (UID: \"70c26392-cfee-4dc3-9c71-4684558daa07\") " pod="openshift-console/downloads-84f57b9877-ftnvc"
Mar 12 14:36:22.139930 master-0 kubenswrapper[37036]: I0312 14:36:22.127209 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" event={"ID":"c2f3fb87-655d-4622-b0c3-4288a9bb76d2","Type":"ContainerStarted","Data":"0e6782648b839c3cc2f6d9b4b8283f81e030c6b84cc4fdc8b0e5394f1de0060d"}
Mar 12 14:36:22.139930 master-0 kubenswrapper[37036]: I0312 14:36:22.127635 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s"
Mar 12 14:36:22.159372 master-0 kubenswrapper[37036]: I0312 14:36:22.147617 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s"
Mar 12 14:36:22.159372 master-0 kubenswrapper[37036]: I0312 14:36:22.154486 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhqn\" (UniqueName: \"kubernetes.io/projected/70c26392-cfee-4dc3-9c71-4684558daa07-kube-api-access-8zhqn\") pod \"downloads-84f57b9877-ftnvc\" (UID: \"70c26392-cfee-4dc3-9c71-4684558daa07\") " pod="openshift-console/downloads-84f57b9877-ftnvc"
Mar 12 14:36:22.165985 master-0 kubenswrapper[37036]: I0312 14:36:22.163542 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-4kp7s" podStartSLOduration=4.052490584 podStartE2EDuration="7.163521433s" podCreationTimestamp="2026-03-12 14:36:15 +0000 UTC" firstStartedPulling="2026-03-12 14:36:17.845600637 +0000 UTC m=+36.853341584" lastFinishedPulling="2026-03-12 14:36:20.956631496 +0000 UTC m=+39.964372433" observedRunningTime="2026-03-12 14:36:22.162250011 +0000 UTC m=+41.169990958" watchObservedRunningTime="2026-03-12 14:36:22.163521433 +0000 UTC m=+41.171262370"
Mar 12 14:36:22.190729 master-0 kubenswrapper[37036]: I0312 14:36:22.190656 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-64948d9545-xshsb"]
Mar 12 14:36:22.439679 master-0 kubenswrapper[37036]: I0312 14:36:22.434919 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-ftnvc"
Mar 12 14:36:22.732339 master-0 kubenswrapper[37036]: I0312 14:36:22.732217 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:22.732848 master-0 kubenswrapper[37036]: E0312 14:36:22.732369 37036 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:22.732848 master-0 kubenswrapper[37036]: E0312 14:36:22.732440 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig podName:bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:24.732422075 +0000 UTC m=+43.740163012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig") pod "oauth-openshift-d94bf6d99-jdmf7" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34") : configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:22.890658 master-0 kubenswrapper[37036]: I0312 14:36:22.890602 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:22.891283 master-0 kubenswrapper[37036]: I0312 14:36:22.891246 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:36:22.942414 master-0 kubenswrapper[37036]: I0312 14:36:22.941185 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-ftnvc"]
Mar 12 14:36:22.945692 master-0 kubenswrapper[37036]: W0312 14:36:22.945615 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c26392_cfee_4dc3_9c71_4684558daa07.slice/crio-6fe0d8884d1afc0ca0f0c0eba25cdf74bda4d8a2320b9bf38893163008aa2415 WatchSource:0}: Error finding container 6fe0d8884d1afc0ca0f0c0eba25cdf74bda4d8a2320b9bf38893163008aa2415: Status 404 returned error can't find the container with id 6fe0d8884d1afc0ca0f0c0eba25cdf74bda4d8a2320b9bf38893163008aa2415
Mar 12 14:36:22.969558 master-0 kubenswrapper[37036]: I0312 14:36:22.969510 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 12 14:36:23.137329 master-0 kubenswrapper[37036]: I0312 14:36:23.136633 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-ftnvc" event={"ID":"70c26392-cfee-4dc3-9c71-4684558daa07","Type":"ContainerStarted","Data":"6fe0d8884d1afc0ca0f0c0eba25cdf74bda4d8a2320b9bf38893163008aa2415"}
Mar 12 14:36:23.142346 master-0 kubenswrapper[37036]: I0312 14:36:23.142294 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb" event={"ID":"a725ec48-e77d-4fce-957a-67abe8712193","Type":"ContainerStarted","Data":"c4088be5e4786afce40bffbe7c0dfa10843cd3ab548fee4e86065e7eea8aa10a"}
Mar 12 14:36:24.149351 master-0 kubenswrapper[37036]: I0312 14:36:24.149216 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb" event={"ID":"a725ec48-e77d-4fce-957a-67abe8712193","Type":"ContainerStarted","Data":"947f8e05e7cb6069be6368196720193c5b5745f4f521e279e940ac7fbd6bbebf"}
Mar 12 14:36:24.187755 master-0 kubenswrapper[37036]: I0312 14:36:24.187641 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb" podStartSLOduration=1.567564223 podStartE2EDuration="3.187625069s" podCreationTimestamp="2026-03-12 14:36:21 +0000 UTC" firstStartedPulling="2026-03-12 14:36:22.205499778 +0000 UTC m=+41.213240715" lastFinishedPulling="2026-03-12 14:36:23.825560624 +0000 UTC m=+42.833301561" observedRunningTime="2026-03-12 14:36:24.187181399 +0000 UTC m=+43.194922336" watchObservedRunningTime="2026-03-12 14:36:24.187625069 +0000 UTC m=+43.195366006"
Mar 12 14:36:24.775861 master-0 kubenswrapper[37036]: I0312 14:36:24.775790 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:24.776091 master-0 kubenswrapper[37036]: I0312 14:36:24.775878 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:24.776091 master-0 kubenswrapper[37036]: E0312 14:36:24.776050 37036 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:24.776091 master-0 kubenswrapper[37036]: E0312 14:36:24.776097 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig podName:bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34 nodeName:}" failed. No retries permitted until 2026-03-12 14:36:28.776083791 +0000 UTC m=+47.783824728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig") pod "oauth-openshift-d94bf6d99-jdmf7" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34") : configmap "v4-0-config-system-cliconfig" not found
Mar 12 14:36:24.776296 master-0 kubenswrapper[37036]: E0312 14:36:24.776250 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:24.776382 master-0 kubenswrapper[37036]: E0312 14:36:24.776298 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:24.776382 master-0 kubenswrapper[37036]: E0312 14:36:24.776363 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:32.776341448 +0000 UTC m=+51.784082385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:25.155951 master-0 kubenswrapper[37036]: I0312 14:36:25.155744 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:25.161325 master-0 kubenswrapper[37036]: I0312 14:36:25.161284 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-64948d9545-xshsb"
Mar 12 14:36:26.213395 master-0 kubenswrapper[37036]: I0312 14:36:26.213323 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmhgb"
Mar 12 14:36:26.513669 master-0 kubenswrapper[37036]: I0312 14:36:26.513617 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9bljc"
Mar 12 14:36:26.808309 master-0 kubenswrapper[37036]: I0312 14:36:26.808059 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgqz4"
Mar 12 14:36:26.817297 master-0 kubenswrapper[37036]: I0312 14:36:26.816109 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gbmc"
Mar 12 14:36:28.843713 master-0 kubenswrapper[37036]: I0312 14:36:28.843621 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:28.844697 master-0 kubenswrapper[37036]: I0312 14:36:28.844525 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d94bf6d99-jdmf7\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:28.897470 master-0 kubenswrapper[37036]: I0312 14:36:28.897404 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"
Mar 12 14:36:29.293501 master-0 kubenswrapper[37036]: I0312 14:36:29.293425 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"]
Mar 12 14:36:30.194543 master-0 kubenswrapper[37036]: I0312 14:36:30.194500 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" event={"ID":"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34","Type":"ContainerStarted","Data":"23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8"}
Mar 12 14:36:31.758203 master-0 kubenswrapper[37036]: I0312 14:36:31.758128 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"]
Mar 12 14:36:31.758751 master-0 kubenswrapper[37036]: I0312 14:36:31.758382 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" containerID="cri-o://80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a" gracePeriod=30
Mar 12 14:36:31.799913 master-0 kubenswrapper[37036]: I0312 14:36:31.799841 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"]
Mar 12 14:36:31.800556 master-0 kubenswrapper[37036]: I0312 14:36:31.800229 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" containerID="cri-o://e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95" gracePeriod=30
Mar 12 14:36:31.947700 master-0 kubenswrapper[37036]: E0312 14:36:31.947645 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99433993_93cf_46cb_bb66_485672cb2554.slice/crio-conmon-80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf31c4c2_304e_4bad_8e6f_18c174eba675.slice/crio-conmon-e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 14:36:32.208616 master-0 kubenswrapper[37036]: I0312 14:36:32.208564 37036 generic.go:334] "Generic (PLEG): container finished" podID="99433993-93cf-46cb-bb66-485672cb2554" containerID="80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a" exitCode=0
Mar 12 14:36:32.208839 master-0 kubenswrapper[37036]: I0312 14:36:32.208673 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerDied","Data":"80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a"}
Mar 12 14:36:32.208839 master-0 kubenswrapper[37036]: I0312 14:36:32.208715 37036 scope.go:117] "RemoveContainer" containerID="942edb2086b196730f2050c8c10e7943616ea284812689341f08412925b12705"
Mar 12 14:36:32.211364 master-0 kubenswrapper[37036]: I0312 14:36:32.210743 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" event={"ID":"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34","Type":"ContainerStarted","Data":"739211b307257fb02a5e37ad3189305b807bf3aad4882366592f2eb9bde6dea0"}
Mar 12 14:36:32.212472 master-0 kubenswrapper[37036]: I0312 14:36:32.212440 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f8bfc67b-pz8rc_df31c4c2-304e-4bad-8e6f-18c174eba675/route-controller-manager/3.log"
Mar 12 14:36:32.212532 master-0 kubenswrapper[37036]: I0312 14:36:32.212489 37036 generic.go:334] "Generic (PLEG): container finished" podID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerID="e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95" exitCode=0
Mar 12 14:36:32.212570 master-0 kubenswrapper[37036]: I0312 14:36:32.212544 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95"}
Mar 12 14:36:32.371648 master-0 kubenswrapper[37036]: I0312 14:36:32.371612 37036 scope.go:117] "RemoveContainer" containerID="61400ed5c81e00b9e0a4acdbab9426e759da65e0bd1381d3d70a790a5d50716c"
Mar 12 14:36:32.470107 master-0 kubenswrapper[37036]: I0312 14:36:32.470048 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"
Mar 12 14:36:32.474206 master-0 kubenswrapper[37036]: I0312 14:36:32.474161 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"
Mar 12 14:36:32.522835 master-0 kubenswrapper[37036]: I0312 14:36:32.522780 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") pod \"99433993-93cf-46cb-bb66-485672cb2554\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.522842 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") pod \"df31c4c2-304e-4bad-8e6f-18c174eba675\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.522932 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") pod \"df31c4c2-304e-4bad-8e6f-18c174eba675\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.522980 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") pod \"99433993-93cf-46cb-bb66-485672cb2554\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.523008 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") pod \"99433993-93cf-46cb-bb66-485672cb2554\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.523025 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") pod \"df31c4c2-304e-4bad-8e6f-18c174eba675\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") "
Mar 12 14:36:32.523130 master-0 kubenswrapper[37036]: I0312 14:36:32.523097 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") pod \"df31c4c2-304e-4bad-8e6f-18c174eba675\" (UID: \"df31c4c2-304e-4bad-8e6f-18c174eba675\") "
Mar 12 14:36:32.523358 master-0 kubenswrapper[37036]: I0312 14:36:32.523148 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") pod \"99433993-93cf-46cb-bb66-485672cb2554\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") "
Mar 12 14:36:32.523358 master-0 kubenswrapper[37036]: I0312 14:36:32.523183 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") pod \"99433993-93cf-46cb-bb66-485672cb2554\" (UID: \"99433993-93cf-46cb-bb66-485672cb2554\") "
Mar 12 14:36:32.524025 master-0 kubenswrapper[37036]: I0312 14:36:32.524000 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca" (OuterVolumeSpecName: "client-ca") pod "99433993-93cf-46cb-bb66-485672cb2554" (UID: "99433993-93cf-46cb-bb66-485672cb2554"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:36:32.524824 master-0 kubenswrapper[37036]: I0312 14:36:32.524801 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca" (OuterVolumeSpecName: "client-ca") pod "df31c4c2-304e-4bad-8e6f-18c174eba675" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:36:32.525358 master-0 kubenswrapper[37036]: I0312 14:36:32.525301 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config" (OuterVolumeSpecName: "config") pod "df31c4c2-304e-4bad-8e6f-18c174eba675" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:36:32.525429 master-0 kubenswrapper[37036]: I0312 14:36:32.525405 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "99433993-93cf-46cb-bb66-485672cb2554" (UID: "99433993-93cf-46cb-bb66-485672cb2554"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:36:32.525532 master-0 kubenswrapper[37036]: I0312 14:36:32.525500 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config" (OuterVolumeSpecName: "config") pod "99433993-93cf-46cb-bb66-485672cb2554" (UID: "99433993-93cf-46cb-bb66-485672cb2554"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:36:32.527487 master-0 kubenswrapper[37036]: I0312 14:36:32.527345 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2" (OuterVolumeSpecName: "kube-api-access-2dlf2") pod "99433993-93cf-46cb-bb66-485672cb2554" (UID: "99433993-93cf-46cb-bb66-485672cb2554"). InnerVolumeSpecName "kube-api-access-2dlf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:36:32.527650 master-0 kubenswrapper[37036]: I0312 14:36:32.527491 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n" (OuterVolumeSpecName: "kube-api-access-gg62n") pod "df31c4c2-304e-4bad-8e6f-18c174eba675" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675"). InnerVolumeSpecName "kube-api-access-gg62n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:36:32.527650 master-0 kubenswrapper[37036]: I0312 14:36:32.527621 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99433993-93cf-46cb-bb66-485672cb2554" (UID: "99433993-93cf-46cb-bb66-485672cb2554"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:36:32.528998 master-0 kubenswrapper[37036]: I0312 14:36:32.528936 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df31c4c2-304e-4bad-8e6f-18c174eba675" (UID: "df31c4c2-304e-4bad-8e6f-18c174eba675"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:36:32.624768 master-0 kubenswrapper[37036]: I0312 14:36:32.624698 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.624768 master-0 kubenswrapper[37036]: I0312 14:36:32.624745 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df31c4c2-304e-4bad-8e6f-18c174eba675-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.624768 master-0 kubenswrapper[37036]: I0312 14:36:32.624758 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.624768 master-0 kubenswrapper[37036]: I0312 14:36:32.624771 37036 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.624768 master-0 kubenswrapper[37036]: I0312 14:36:32.624784 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99433993-93cf-46cb-bb66-485672cb2554-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.625353 master-0 kubenswrapper[37036]: I0312 14:36:32.624795 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99433993-93cf-46cb-bb66-485672cb2554-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.625353 master-0 kubenswrapper[37036]: I0312 14:36:32.624809 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df31c4c2-304e-4bad-8e6f-18c174eba675-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.625353 master-0 kubenswrapper[37036]: I0312 14:36:32.624825 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg62n\" (UniqueName: \"kubernetes.io/projected/df31c4c2-304e-4bad-8e6f-18c174eba675-kube-api-access-gg62n\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.625353 master-0 kubenswrapper[37036]: I0312 14:36:32.624838 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dlf2\" (UniqueName: \"kubernetes.io/projected/99433993-93cf-46cb-bb66-485672cb2554-kube-api-access-2dlf2\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:32.827281 master-0 kubenswrapper[37036]: I0312 14:36:32.827190 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:32.827852 master-0 kubenswrapper[37036]: E0312 14:36:32.827401 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:32.827852 master-0 kubenswrapper[37036]: E0312 14:36:32.827424 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:32.827852 master-0 kubenswrapper[37036]: E0312 14:36:32.827483 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:36:48.827465693 +0000 UTC m=+67.835206630 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:32.893151 master-0 kubenswrapper[37036]: I0312 14:36:32.891876 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.232939 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b77f48c6d-w6489"]
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: E0312 14:36:33.233218 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233230 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: E0312 14:36:33.233252 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233258 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: E0312 14:36:33.233266 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233285 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager"
Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233417 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233460 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" Mar 12 14:36:33.234033 master-0 kubenswrapper[37036]: I0312 14:36:33.233474 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" Mar 12 14:36:33.234603 master-0 kubenswrapper[37036]: I0312 14:36:33.234053 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.235989 master-0 kubenswrapper[37036]: I0312 14:36:33.235079 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.237360 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-d28jx" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.237451 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.237544 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.237972 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.238319 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 14:36:33.238940 master-0 kubenswrapper[37036]: I0312 14:36:33.238342 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 14:36:33.241304 master-0 kubenswrapper[37036]: I0312 14:36:33.241274 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276558 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276598 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc" event={"ID":"df31c4c2-304e-4bad-8e6f-18c174eba675","Type":"ContainerDied","Data":"0797fe88dc9adea8392e9b93088b1a0313bddd85f5318d3039e5b08dcf043b58"} Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276623 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b77f48c6d-w6489"] Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276659 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276669 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd" event={"ID":"99433993-93cf-46cb-bb66-485672cb2554","Type":"ContainerDied","Data":"2e21aa41c709714c621e81f34dd2940d383309852477d3447a69f2b11767e16e"} Mar 12 14:36:33.276963 master-0 kubenswrapper[37036]: I0312 14:36:33.276689 37036 scope.go:117] "RemoveContainer" containerID="e05cf7c7c58106dc1c6b46b6d00fbb76e60bbaa968f5d7f6eb52040b9ee4fd95" Mar 12 14:36:33.322079 master-0 kubenswrapper[37036]: I0312 14:36:33.322012 37036 scope.go:117] "RemoveContainer" containerID="80852c13a84697f07d1a8ca8a4892c3fa3a6416ed1dfca07e537b2d4c816a13a" Mar 12 14:36:33.329362 master-0 kubenswrapper[37036]: I0312 14:36:33.329192 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" podStartSLOduration=9.999021351 podStartE2EDuration="12.329177505s" podCreationTimestamp="2026-03-12 14:36:21 +0000 UTC" firstStartedPulling="2026-03-12 14:36:29.301591892 +0000 UTC m=+48.309332819" lastFinishedPulling="2026-03-12 14:36:31.631748036 +0000 UTC m=+50.639488973" observedRunningTime="2026-03-12 14:36:33.328622592 +0000 UTC m=+52.336363549" watchObservedRunningTime="2026-03-12 14:36:33.329177505 +0000 UTC m=+52.336918442" Mar 12 14:36:33.335352 master-0 kubenswrapper[37036]: I0312 14:36:33.335298 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.335550 master-0 kubenswrapper[37036]: I0312 14:36:33.335362 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvdn\" (UniqueName: \"kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.335550 master-0 kubenswrapper[37036]: I0312 14:36:33.335412 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.335550 master-0 kubenswrapper[37036]: I0312 14:36:33.335532 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.335647 master-0 kubenswrapper[37036]: I0312 14:36:33.335597 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.335647 master-0 kubenswrapper[37036]: I0312 14:36:33.335619 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.426543 master-0 kubenswrapper[37036]: I0312 14:36:33.425719 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"] Mar 12 14:36:33.431742 master-0 kubenswrapper[37036]: I0312 14:36:33.431690 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6689dcd7fd-vw9vd"] Mar 12 14:36:33.472016 master-0 kubenswrapper[37036]: I0312 14:36:33.471968 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.472284 master-0 kubenswrapper[37036]: I0312 14:36:33.472267 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.472419 master-0 kubenswrapper[37036]: I0312 14:36:33.472404 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.472581 master-0 kubenswrapper[37036]: I0312 14:36:33.472565 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.472753 master-0 kubenswrapper[37036]: I0312 14:36:33.472734 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvdn\" (UniqueName: \"kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.472873 master-0 kubenswrapper[37036]: I0312 14:36:33.472858 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.488988 master-0 
kubenswrapper[37036]: I0312 14:36:33.473879 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.507144 master-0 kubenswrapper[37036]: I0312 14:36:33.479696 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.507473 master-0 kubenswrapper[37036]: I0312 14:36:33.479817 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.507598 master-0 kubenswrapper[37036]: I0312 14:36:33.482641 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.507786 master-0 kubenswrapper[37036]: I0312 14:36:33.495602 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.507882 master-0 
kubenswrapper[37036]: I0312 14:36:33.502168 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"] Mar 12 14:36:33.512733 master-0 kubenswrapper[37036]: I0312 14:36:33.509840 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8bfc67b-pz8rc"] Mar 12 14:36:33.512733 master-0 kubenswrapper[37036]: I0312 14:36:33.511779 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvdn\" (UniqueName: \"kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn\") pod \"console-6b77f48c6d-w6489\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.613134 master-0 kubenswrapper[37036]: I0312 14:36:33.612647 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:36:33.943237 master-0 kubenswrapper[37036]: I0312 14:36:33.942476 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:36:33.943237 master-0 kubenswrapper[37036]: E0312 14:36:33.942915 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" Mar 12 14:36:33.943237 master-0 kubenswrapper[37036]: I0312 14:36:33.942932 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="99433993-93cf-46cb-bb66-485672cb2554" containerName="controller-manager" Mar 12 14:36:33.943237 master-0 kubenswrapper[37036]: I0312 14:36:33.943134 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" containerName="route-controller-manager" Mar 12 14:36:33.943886 master-0 kubenswrapper[37036]: I0312 14:36:33.943866 37036 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:33.946760 master-0 kubenswrapper[37036]: I0312 14:36:33.946551 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:36:33.946977 master-0 kubenswrapper[37036]: I0312 14:36:33.946795 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:36:33.946977 master-0 kubenswrapper[37036]: I0312 14:36:33.946807 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82tbw" Mar 12 14:36:33.947107 master-0 kubenswrapper[37036]: I0312 14:36:33.947082 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:36:33.947300 master-0 kubenswrapper[37036]: I0312 14:36:33.947279 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:36:33.947388 master-0 kubenswrapper[37036]: I0312 14:36:33.947373 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:36:33.957380 master-0 kubenswrapper[37036]: I0312 14:36:33.952835 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:36:34.044052 master-0 kubenswrapper[37036]: I0312 14:36:34.043279 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b77f48c6d-w6489"] Mar 12 14:36:34.082403 master-0 kubenswrapper[37036]: I0312 14:36:34.082324 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.082726 master-0 kubenswrapper[37036]: I0312 14:36:34.082413 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.082726 master-0 kubenswrapper[37036]: I0312 14:36:34.082457 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.082726 master-0 kubenswrapper[37036]: I0312 14:36:34.082537 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzc5\" (UniqueName: \"kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.183328 master-0 kubenswrapper[37036]: I0312 14:36:34.183258 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xzc5\" (UniqueName: \"kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5\") pod 
\"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.183547 master-0 kubenswrapper[37036]: I0312 14:36:34.183346 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.183547 master-0 kubenswrapper[37036]: I0312 14:36:34.183400 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.184058 master-0 kubenswrapper[37036]: I0312 14:36:34.184002 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.184382 master-0 kubenswrapper[37036]: I0312 14:36:34.184350 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.184963 master-0 kubenswrapper[37036]: 
I0312 14:36:34.184935 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.197767 master-0 kubenswrapper[37036]: I0312 14:36:34.197727 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.200470 master-0 kubenswrapper[37036]: I0312 14:36:34.200417 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xzc5\" (UniqueName: \"kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5\") pod \"route-controller-manager-7fbf8bd5df-vsfpb\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.247987 master-0 kubenswrapper[37036]: I0312 14:36:34.247930 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b77f48c6d-w6489" event={"ID":"1dd55143-3e81-4eb5-9f83-b4c13614dd69","Type":"ContainerStarted","Data":"acf7593e35971481ff36e3b3d9c788080b6be43257a0f65baadbb95e0371defe"} Mar 12 14:36:34.269457 master-0 kubenswrapper[37036]: I0312 14:36:34.269411 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:36:34.620570 master-0 kubenswrapper[37036]: I0312 14:36:34.620453 37036 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 14:36:34.620791 master-0 kubenswrapper[37036]: I0312 14:36:34.620715 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" containerID="cri-o://e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35" gracePeriod=5 Mar 12 14:36:34.741103 master-0 kubenswrapper[37036]: I0312 14:36:34.741051 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:36:34.753565 master-0 kubenswrapper[37036]: W0312 14:36:34.753520 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18c7090_21d2_45e8_abf1_7ebf7e151c77.slice/crio-36f9994625807a2f2debb395abd66fc1d434681fc029a6cf92a88a6d8628134e WatchSource:0}: Error finding container 36f9994625807a2f2debb395abd66fc1d434681fc029a6cf92a88a6d8628134e: Status 404 returned error can't find the container with id 36f9994625807a2f2debb395abd66fc1d434681fc029a6cf92a88a6d8628134e Mar 12 14:36:35.014926 master-0 kubenswrapper[37036]: I0312 14:36:35.013987 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"] Mar 12 14:36:35.014926 master-0 kubenswrapper[37036]: E0312 14:36:35.014303 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 14:36:35.014926 master-0 kubenswrapper[37036]: I0312 14:36:35.014318 37036 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 14:36:35.014926 master-0 kubenswrapper[37036]: I0312 14:36:35.014485 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 14:36:35.015607 master-0 kubenswrapper[37036]: I0312 14:36:35.014947 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" Mar 12 14:36:35.017594 master-0 kubenswrapper[37036]: I0312 14:36:35.017539 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:36:35.018041 master-0 kubenswrapper[37036]: I0312 14:36:35.017805 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2qg98" Mar 12 14:36:35.018041 master-0 kubenswrapper[37036]: I0312 14:36:35.017826 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:36:35.018374 master-0 kubenswrapper[37036]: I0312 14:36:35.018243 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:36:35.018374 master-0 kubenswrapper[37036]: I0312 14:36:35.018276 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:36:35.029923 master-0 kubenswrapper[37036]: I0312 14:36:35.024847 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:36:35.029923 master-0 kubenswrapper[37036]: I0312 14:36:35.027069 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:36:35.033921 master-0 kubenswrapper[37036]: I0312 
14:36:35.032628 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"]
Mar 12 14:36:35.198722 master-0 kubenswrapper[37036]: I0312 14:36:35.198588 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckqrh\" (UniqueName: \"kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.198951 master-0 kubenswrapper[37036]: I0312 14:36:35.198795 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.198951 master-0 kubenswrapper[37036]: I0312 14:36:35.198872 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.198951 master-0 kubenswrapper[37036]: I0312 14:36:35.198924 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.199098 master-0 kubenswrapper[37036]: I0312 14:36:35.198965 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.243658 master-0 kubenswrapper[37036]: I0312 14:36:35.243598 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99433993-93cf-46cb-bb66-485672cb2554" path="/var/lib/kubelet/pods/99433993-93cf-46cb-bb66-485672cb2554/volumes"
Mar 12 14:36:35.244437 master-0 kubenswrapper[37036]: I0312 14:36:35.244399 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df31c4c2-304e-4bad-8e6f-18c174eba675" path="/var/lib/kubelet/pods/df31c4c2-304e-4bad-8e6f-18c174eba675/volumes"
Mar 12 14:36:35.257849 master-0 kubenswrapper[37036]: I0312 14:36:35.257803 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" event={"ID":"f18c7090-21d2-45e8-abf1-7ebf7e151c77","Type":"ContainerStarted","Data":"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9"}
Mar 12 14:36:35.257849 master-0 kubenswrapper[37036]: I0312 14:36:35.257847 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" event={"ID":"f18c7090-21d2-45e8-abf1-7ebf7e151c77","Type":"ContainerStarted","Data":"36f9994625807a2f2debb395abd66fc1d434681fc029a6cf92a88a6d8628134e"}
Mar 12 14:36:35.258144 master-0 kubenswrapper[37036]: I0312 14:36:35.258098 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"
Mar 12 14:36:35.278538 master-0 kubenswrapper[37036]: I0312 14:36:35.278467 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" podStartSLOduration=4.27844831 podStartE2EDuration="4.27844831s" podCreationTimestamp="2026-03-12 14:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:36:35.27288701 +0000 UTC m=+54.280627947" watchObservedRunningTime="2026-03-12 14:36:35.27844831 +0000 UTC m=+54.286189247"
Mar 12 14:36:35.300781 master-0 kubenswrapper[37036]: I0312 14:36:35.300730 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.300781 master-0 kubenswrapper[37036]: I0312 14:36:35.300781 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.301050 master-0 kubenswrapper[37036]: I0312 14:36:35.300954 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.301228 master-0 kubenswrapper[37036]: I0312 14:36:35.301203 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckqrh\" (UniqueName: \"kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.301316 master-0 kubenswrapper[37036]: I0312 14:36:35.301284 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.301706 master-0 kubenswrapper[37036]: I0312 14:36:35.301669 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.302686 master-0 kubenswrapper[37036]: I0312 14:36:35.302644 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.302754 master-0 kubenswrapper[37036]: I0312 14:36:35.302726 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.304443 master-0 kubenswrapper[37036]: I0312 14:36:35.304396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.318543 master-0 kubenswrapper[37036]: I0312 14:36:35.318505 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckqrh\" (UniqueName: \"kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh\") pod \"controller-manager-667cf89f7-gvhgl\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.347283 master-0 kubenswrapper[37036]: I0312 14:36:35.347250 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:35.403310 master-0 kubenswrapper[37036]: I0312 14:36:35.401353 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"
Mar 12 14:36:35.797573 master-0 kubenswrapper[37036]: I0312 14:36:35.797521 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"]
Mar 12 14:36:35.807589 master-0 kubenswrapper[37036]: W0312 14:36:35.807522 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda335739e_a77f_4315_9aa8_4eb3361acd6a.slice/crio-a85406199e50819889fdf742efed88428e1e7f4b54ae87737a0836191f2ab799 WatchSource:0}: Error finding container a85406199e50819889fdf742efed88428e1e7f4b54ae87737a0836191f2ab799: Status 404 returned error can't find the container with id a85406199e50819889fdf742efed88428e1e7f4b54ae87737a0836191f2ab799
Mar 12 14:36:36.269875 master-0 kubenswrapper[37036]: I0312 14:36:36.268921 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" event={"ID":"a335739e-a77f-4315-9aa8-4eb3361acd6a","Type":"ContainerStarted","Data":"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285"}
Mar 12 14:36:36.269875 master-0 kubenswrapper[37036]: I0312 14:36:36.269006 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" event={"ID":"a335739e-a77f-4315-9aa8-4eb3361acd6a","Type":"ContainerStarted","Data":"a85406199e50819889fdf742efed88428e1e7f4b54ae87737a0836191f2ab799"}
Mar 12 14:36:36.269875 master-0 kubenswrapper[37036]: I0312 14:36:36.269509 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:36.276430 master-0 kubenswrapper[37036]: I0312 14:36:36.276191 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl"
Mar 12 14:36:36.289020 master-0 kubenswrapper[37036]: I0312 14:36:36.288947 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" podStartSLOduration=5.288930074 podStartE2EDuration="5.288930074s" podCreationTimestamp="2026-03-12 14:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:36:36.287194211 +0000 UTC m=+55.294935158" watchObservedRunningTime="2026-03-12 14:36:36.288930074 +0000 UTC m=+55.296671011"
Mar 12 14:36:37.092089 master-0 kubenswrapper[37036]: I0312 14:36:37.091860 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:36:38.613578 master-0 kubenswrapper[37036]: I0312 14:36:38.613454 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c847675b7-vfq5t"]
Mar 12 14:36:38.614487 master-0 kubenswrapper[37036]: I0312 14:36:38.614252 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.626660 master-0 kubenswrapper[37036]: I0312 14:36:38.626627 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 12 14:36:38.637861 master-0 kubenswrapper[37036]: I0312 14:36:38.637788 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c847675b7-vfq5t"]
Mar 12 14:36:38.771805 master-0 kubenswrapper[37036]: I0312 14:36:38.771676 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.771805 master-0 kubenswrapper[37036]: I0312 14:36:38.771739 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.771805 master-0 kubenswrapper[37036]: I0312 14:36:38.771783 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.772119 master-0 kubenswrapper[37036]: I0312 14:36:38.771964 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdbd\" (UniqueName: \"kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.772119 master-0 kubenswrapper[37036]: I0312 14:36:38.772072 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.772186 master-0 kubenswrapper[37036]: I0312 14:36:38.772131 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.772186 master-0 kubenswrapper[37036]: I0312 14:36:38.772155 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873110 master-0 kubenswrapper[37036]: I0312 14:36:38.873055 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873110 master-0 kubenswrapper[37036]: I0312 14:36:38.873095 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873329 master-0 kubenswrapper[37036]: I0312 14:36:38.873222 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873329 master-0 kubenswrapper[37036]: I0312 14:36:38.873275 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdbd\" (UniqueName: \"kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873329 master-0 kubenswrapper[37036]: I0312 14:36:38.873298 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873329 master-0 kubenswrapper[37036]: I0312 14:36:38.873319 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873451 master-0 kubenswrapper[37036]: I0312 14:36:38.873333 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.873964 master-0 kubenswrapper[37036]: I0312 14:36:38.873926 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.874002 master-0 kubenswrapper[37036]: I0312 14:36:38.873926 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.874394 master-0 kubenswrapper[37036]: I0312 14:36:38.874369 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.874632 master-0 kubenswrapper[37036]: I0312 14:36:38.874599 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.876808 master-0 kubenswrapper[37036]: I0312 14:36:38.876767 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.883075 master-0 kubenswrapper[37036]: I0312 14:36:38.883022 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.889574 master-0 kubenswrapper[37036]: I0312 14:36:38.889532 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdbd\" (UniqueName: \"kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd\") pod \"console-c847675b7-vfq5t\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") " pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:38.930481 master-0 kubenswrapper[37036]: I0312 14:36:38.930394 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:36:39.305338 master-0 kubenswrapper[37036]: I0312 14:36:39.305222 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b77f48c6d-w6489" event={"ID":"1dd55143-3e81-4eb5-9f83-b4c13614dd69","Type":"ContainerStarted","Data":"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb"}
Mar 12 14:36:39.318663 master-0 kubenswrapper[37036]: I0312 14:36:39.318604 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c847675b7-vfq5t"]
Mar 12 14:36:39.329068 master-0 kubenswrapper[37036]: I0312 14:36:39.328992 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b77f48c6d-w6489" podStartSLOduration=1.885799529 podStartE2EDuration="6.32897169s" podCreationTimestamp="2026-03-12 14:36:33 +0000 UTC" firstStartedPulling="2026-03-12 14:36:34.052407074 +0000 UTC m=+53.060148011" lastFinishedPulling="2026-03-12 14:36:38.495579235 +0000 UTC m=+57.503320172" observedRunningTime="2026-03-12 14:36:39.326880438 +0000 UTC m=+58.334621395" watchObservedRunningTime="2026-03-12 14:36:39.32897169 +0000 UTC m=+58.336712627"
Mar 12 14:36:39.331965 master-0 kubenswrapper[37036]: W0312 14:36:39.331884 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0323a60d_acb9_4209_a5a5_9b45cc819ac5.slice/crio-77c6c832f2a05f89db685a86302c19f4dd2903a2a825c503719fb08eb2eb2a1d WatchSource:0}: Error finding container 77c6c832f2a05f89db685a86302c19f4dd2903a2a825c503719fb08eb2eb2a1d: Status 404 returned error can't find the container with id 77c6c832f2a05f89db685a86302c19f4dd2903a2a825c503719fb08eb2eb2a1d
Mar 12 14:36:40.186689 master-0 kubenswrapper[37036]: I0312 14:36:40.186648 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 12 14:36:40.187284 master-0 kubenswrapper[37036]: I0312 14:36:40.186725 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:36:40.293506 master-0 kubenswrapper[37036]: I0312 14:36:40.293457 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 14:36:40.293506 master-0 kubenswrapper[37036]: I0312 14:36:40.293508 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 14:36:40.293727 master-0 kubenswrapper[37036]: I0312 14:36:40.293581 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 14:36:40.293727 master-0 kubenswrapper[37036]: I0312 14:36:40.293612 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 14:36:40.293727 master-0 kubenswrapper[37036]: I0312 14:36:40.293672 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 14:36:40.294067 master-0 kubenswrapper[37036]: I0312 14:36:40.294039 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log" (OuterVolumeSpecName: "var-log") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:40.294116 master-0 kubenswrapper[37036]: I0312 14:36:40.294082 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:40.294116 master-0 kubenswrapper[37036]: I0312 14:36:40.294100 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests" (OuterVolumeSpecName: "manifests") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:40.294376 master-0 kubenswrapper[37036]: I0312 14:36:40.294345 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:40.299066 master-0 kubenswrapper[37036]: I0312 14:36:40.299029 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:36:40.324621 master-0 kubenswrapper[37036]: I0312 14:36:40.324333 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 12 14:36:40.324621 master-0 kubenswrapper[37036]: I0312 14:36:40.324388 37036 generic.go:334] "Generic (PLEG): container finished" podID="3a18cac8a90d6913a6a0391d805cddc9" containerID="e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35" exitCode=137
Mar 12 14:36:40.324621 master-0 kubenswrapper[37036]: I0312 14:36:40.324488 37036 scope.go:117] "RemoveContainer" containerID="e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"
Mar 12 14:36:40.324621 master-0 kubenswrapper[37036]: I0312 14:36:40.324588 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 14:36:40.330223 master-0 kubenswrapper[37036]: I0312 14:36:40.330160 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c847675b7-vfq5t" event={"ID":"0323a60d-acb9-4209-a5a5-9b45cc819ac5","Type":"ContainerStarted","Data":"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"}
Mar 12 14:36:40.330329 master-0 kubenswrapper[37036]: I0312 14:36:40.330231 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c847675b7-vfq5t" event={"ID":"0323a60d-acb9-4209-a5a5-9b45cc819ac5","Type":"ContainerStarted","Data":"77c6c832f2a05f89db685a86302c19f4dd2903a2a825c503719fb08eb2eb2a1d"}
Mar 12 14:36:40.349242 master-0 kubenswrapper[37036]: I0312 14:36:40.349181 37036 scope.go:117] "RemoveContainer" containerID="e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"
Mar 12 14:36:40.350550 master-0 kubenswrapper[37036]: E0312 14:36:40.350142 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35\": container with ID starting with e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35 not found: ID does not exist" containerID="e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"
Mar 12 14:36:40.350550 master-0 kubenswrapper[37036]: I0312 14:36:40.350215 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35"} err="failed to get container status \"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35\": rpc error: code = NotFound desc = could not find container \"e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35\": container with ID starting with e9fc6346a6da6119c81346ba303c8b5290b20fcbd3042c75e28a3ab7c8620e35 not found: ID does not exist"
Mar 12 14:36:40.357284 master-0 kubenswrapper[37036]: I0312 14:36:40.355615 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c847675b7-vfq5t" podStartSLOduration=2.355594378 podStartE2EDuration="2.355594378s" podCreationTimestamp="2026-03-12 14:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:36:40.353447035 +0000 UTC m=+59.361187982" watchObservedRunningTime="2026-03-12 14:36:40.355594378 +0000 UTC m=+59.363335315"
Mar 12 14:36:40.375645 master-0 kubenswrapper[37036]: I0312 14:36:40.375578 37036 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="be0ec060-e830-40bd-bdcd-58aa9a293d52"
Mar 12 14:36:40.394887 master-0 kubenswrapper[37036]: I0312 14:36:40.394768 37036 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:40.394887 master-0 kubenswrapper[37036]: I0312 14:36:40.394809 37036 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:40.394887 master-0 kubenswrapper[37036]: I0312 14:36:40.394818 37036 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:40.394887 master-0 kubenswrapper[37036]: I0312 14:36:40.394826 37036 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:40.394887 master-0 kubenswrapper[37036]: I0312 14:36:40.394835 37036 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") on node \"master-0\" DevicePath \"\""
Mar 12 14:36:41.243235 master-0 kubenswrapper[37036]: I0312 14:36:41.243187 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18cac8a90d6913a6a0391d805cddc9" path="/var/lib/kubelet/pods/3a18cac8a90d6913a6a0391d805cddc9/volumes"
Mar 12 14:36:41.243670 master-0 kubenswrapper[37036]: I0312 14:36:41.243434 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 12 14:36:41.262434 master-0 kubenswrapper[37036]: I0312 14:36:41.262398 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 14:36:41.262552 master-0 kubenswrapper[37036]: I0312 14:36:41.262529 37036 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="be0ec060-e830-40bd-bdcd-58aa9a293d52"
Mar 12 14:36:41.265401 master-0 kubenswrapper[37036]: I0312 14:36:41.265331 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 14:36:41.265401 master-0 kubenswrapper[37036]: I0312 14:36:41.265397 37036 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="be0ec060-e830-40bd-bdcd-58aa9a293d52"
Mar 12 14:36:43.613291 master-0 kubenswrapper[37036]: I0312 14:36:43.613188 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b77f48c6d-w6489"
Mar 12 14:36:43.613291 master-0 kubenswrapper[37036]: I0312 14:36:43.613278 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b77f48c6d-w6489"
Mar 12 14:36:43.615261 master-0 kubenswrapper[37036]: I0312 14:36:43.615207 37036 patch_prober.go:28] interesting pod/console-6b77f48c6d-w6489 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 12 14:36:43.615941 master-0 kubenswrapper[37036]: I0312 14:36:43.615291 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b77f48c6d-w6489" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 12 14:36:45.213812 master-0 kubenswrapper[37036]: I0312 14:36:45.212849 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"]
Mar 12 14:36:48.006671 master-0 kubenswrapper[37036]: I0312 14:36:48.006571 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nn5f6"]
Mar 12 14:36:48.007630 master-0 kubenswrapper[37036]: I0312 14:36:48.007609 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.013721 master-0 kubenswrapper[37036]: I0312 14:36:48.012546 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 12 14:36:48.013721 master-0 kubenswrapper[37036]: I0312 14:36:48.013373 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-x42kh"
Mar 12 14:36:48.118841 master-0 kubenswrapper[37036]: I0312 14:36:48.118420 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmrk7\" (UniqueName: \"kubernetes.io/projected/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-kube-api-access-kmrk7\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.118841 master-0 kubenswrapper[37036]: I0312 14:36:48.118488 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-serviceca\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.118841 master-0 kubenswrapper[37036]: I0312 14:36:48.118521 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-host\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.219888 master-0 kubenswrapper[37036]: I0312 14:36:48.219804 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-serviceca\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.220551 master-0 kubenswrapper[37036]: I0312 14:36:48.220141 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-host\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.220551 master-0 kubenswrapper[37036]: I0312 14:36:48.220252 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-host\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.220551 master-0 kubenswrapper[37036]: I0312 14:36:48.220256 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmrk7\" (UniqueName: \"kubernetes.io/projected/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-kube-api-access-kmrk7\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.220551 master-0 kubenswrapper[37036]: I0312 14:36:48.220432 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-serviceca\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.237759 master-0 kubenswrapper[37036]: I0312 14:36:48.237721 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmrk7\" (UniqueName: \"kubernetes.io/projected/4e5aaf2a-7df5-464b-b7c1-5a0e696eff02-kube-api-access-kmrk7\") pod \"node-ca-nn5f6\" (UID: \"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02\") " pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.339973 master-0 kubenswrapper[37036]: I0312 14:36:48.339836 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nn5f6"
Mar 12 14:36:48.929177 master-0 kubenswrapper[37036]: I0312 14:36:48.928871 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:36:48.929177 master-0 kubenswrapper[37036]: E0312 14:36:48.929049 37036 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:48.929177 master-0 kubenswrapper[37036]: E0312 14:36:48.929077 37036 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 14:36:48.929177 master-0 kubenswrapper[37036]: E0312 14:36:48.929138 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access podName:5a56d42a-efb4-4956-acab-d12c7ca5276e nodeName:}" failed. No retries permitted until 2026-03-12 14:37:20.929118946 +0000 UTC m=+99.936859883 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access") pod "installer-4-master-0" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 14:36:48.931135 master-0 kubenswrapper[37036]: I0312 14:36:48.931101 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c847675b7-vfq5t" Mar 12 14:36:48.932966 master-0 kubenswrapper[37036]: I0312 14:36:48.932882 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 12 14:36:48.933162 master-0 kubenswrapper[37036]: I0312 14:36:48.932880 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c847675b7-vfq5t" Mar 12 14:36:48.933162 master-0 kubenswrapper[37036]: I0312 14:36:48.932940 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 12 14:36:53.614099 master-0 kubenswrapper[37036]: I0312 14:36:53.614043 37036 patch_prober.go:28] interesting pod/console-6b77f48c6d-w6489 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 12 14:36:53.614686 master-0 kubenswrapper[37036]: I0312 14:36:53.614116 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b77f48c6d-w6489" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 12 14:36:58.632722 master-0 kubenswrapper[37036]: W0312 14:36:58.632175 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e5aaf2a_7df5_464b_b7c1_5a0e696eff02.slice/crio-0e5e1a53e45bb38f45a74d48c089b0a81e4f27dceb169f2d802e8f0d6e821789 WatchSource:0}: Error finding container 0e5e1a53e45bb38f45a74d48c089b0a81e4f27dceb169f2d802e8f0d6e821789: Status 404 returned error can't find the container with id 0e5e1a53e45bb38f45a74d48c089b0a81e4f27dceb169f2d802e8f0d6e821789 Mar 12 14:36:58.931603 master-0 kubenswrapper[37036]: I0312 14:36:58.931573 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 12 14:36:58.931748 master-0 kubenswrapper[37036]: I0312 14:36:58.931721 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 12 14:36:59.498810 master-0 kubenswrapper[37036]: I0312 14:36:59.498754 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nn5f6" event={"ID":"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02","Type":"ContainerStarted","Data":"0e5e1a53e45bb38f45a74d48c089b0a81e4f27dceb169f2d802e8f0d6e821789"} Mar 12 14:36:59.500065 master-0 kubenswrapper[37036]: I0312 14:36:59.500041 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-ftnvc" 
event={"ID":"70c26392-cfee-4dc3-9c71-4684558daa07","Type":"ContainerStarted","Data":"26d0652f73bd17ab889998b8a885f596f880ded08c35aabc2c6ad33d7c987aea"} Mar 12 14:36:59.501136 master-0 kubenswrapper[37036]: I0312 14:36:59.501102 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-ftnvc" Mar 12 14:36:59.503015 master-0 kubenswrapper[37036]: I0312 14:36:59.502976 37036 patch_prober.go:28] interesting pod/downloads-84f57b9877-ftnvc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 12 14:36:59.503099 master-0 kubenswrapper[37036]: I0312 14:36:59.503021 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ftnvc" podUID="70c26392-cfee-4dc3-9c71-4684558daa07" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 12 14:36:59.518352 master-0 kubenswrapper[37036]: I0312 14:36:59.518255 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-ftnvc" podStartSLOduration=2.627555191 podStartE2EDuration="38.518231215s" podCreationTimestamp="2026-03-12 14:36:21 +0000 UTC" firstStartedPulling="2026-03-12 14:36:22.94761749 +0000 UTC m=+41.955358427" lastFinishedPulling="2026-03-12 14:36:58.838293514 +0000 UTC m=+77.846034451" observedRunningTime="2026-03-12 14:36:59.514581833 +0000 UTC m=+78.522322780" watchObservedRunningTime="2026-03-12 14:36:59.518231215 +0000 UTC m=+78.525972152" Mar 12 14:37:00.506072 master-0 kubenswrapper[37036]: I0312 14:37:00.506011 37036 patch_prober.go:28] interesting pod/downloads-84f57b9877-ftnvc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 
10.128.0.95:8080: connect: connection refused" start-of-body= Mar 12 14:37:00.506072 master-0 kubenswrapper[37036]: I0312 14:37:00.506063 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ftnvc" podUID="70c26392-cfee-4dc3-9c71-4684558daa07" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 12 14:37:01.512765 master-0 kubenswrapper[37036]: I0312 14:37:01.512707 37036 patch_prober.go:28] interesting pod/downloads-84f57b9877-ftnvc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 12 14:37:01.513338 master-0 kubenswrapper[37036]: I0312 14:37:01.512775 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ftnvc" podUID="70c26392-cfee-4dc3-9c71-4684558daa07" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 12 14:37:02.436138 master-0 kubenswrapper[37036]: I0312 14:37:02.436038 37036 patch_prober.go:28] interesting pod/downloads-84f57b9877-ftnvc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 12 14:37:02.436441 master-0 kubenswrapper[37036]: I0312 14:37:02.436140 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ftnvc" podUID="70c26392-cfee-4dc3-9c71-4684558daa07" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 12 14:37:02.436441 master-0 kubenswrapper[37036]: I0312 14:37:02.436038 37036 
patch_prober.go:28] interesting pod/downloads-84f57b9877-ftnvc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body= Mar 12 14:37:02.436441 master-0 kubenswrapper[37036]: I0312 14:37:02.436220 37036 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-ftnvc" podUID="70c26392-cfee-4dc3-9c71-4684558daa07" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" Mar 12 14:37:03.614302 master-0 kubenswrapper[37036]: I0312 14:37:03.614164 37036 patch_prober.go:28] interesting pod/console-6b77f48c6d-w6489 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 12 14:37:03.614302 master-0 kubenswrapper[37036]: I0312 14:37:03.614224 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b77f48c6d-w6489" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 12 14:37:06.275971 master-0 kubenswrapper[37036]: I0312 14:37:06.272488 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 14:37:06.275971 master-0 kubenswrapper[37036]: I0312 14:37:06.273545 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.275971 master-0 kubenswrapper[37036]: I0312 14:37:06.275634 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xvhv" Mar 12 14:37:06.275971 master-0 kubenswrapper[37036]: I0312 14:37:06.275892 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 14:37:06.399809 master-0 kubenswrapper[37036]: I0312 14:37:06.399742 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.400065 master-0 kubenswrapper[37036]: I0312 14:37:06.399854 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.400065 master-0 kubenswrapper[37036]: I0312 14:37:06.399976 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.501110 master-0 kubenswrapper[37036]: I0312 14:37:06.501052 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock\") pod 
\"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.501312 master-0 kubenswrapper[37036]: I0312 14:37:06.501144 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.501312 master-0 kubenswrapper[37036]: I0312 14:37:06.501155 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.501312 master-0 kubenswrapper[37036]: I0312 14:37:06.501190 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.501312 master-0 kubenswrapper[37036]: I0312 14:37:06.501284 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.543585 master-0 kubenswrapper[37036]: I0312 14:37:06.543470 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nn5f6" 
event={"ID":"4e5aaf2a-7df5-464b-b7c1-5a0e696eff02","Type":"ContainerStarted","Data":"c50e2f196b53d1c114e2e6e6ef6178196b44b90c7995423cf4e69eb97527916b"} Mar 12 14:37:06.565377 master-0 kubenswrapper[37036]: I0312 14:37:06.565128 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 14:37:06.797501 master-0 kubenswrapper[37036]: I0312 14:37:06.797331 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:06.891515 master-0 kubenswrapper[37036]: I0312 14:37:06.891431 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:07.323338 master-0 kubenswrapper[37036]: I0312 14:37:07.323260 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nn5f6" podStartSLOduration=13.502516739 podStartE2EDuration="20.323242026s" podCreationTimestamp="2026-03-12 14:36:47 +0000 UTC" firstStartedPulling="2026-03-12 14:36:58.634945896 +0000 UTC m=+77.642686833" lastFinishedPulling="2026-03-12 14:37:05.455671183 +0000 UTC m=+84.463412120" observedRunningTime="2026-03-12 14:37:06.805254784 +0000 UTC m=+85.812995751" watchObservedRunningTime="2026-03-12 14:37:07.323242026 +0000 UTC m=+86.330982963" Mar 12 14:37:07.323855 master-0 kubenswrapper[37036]: I0312 14:37:07.323818 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 14:37:07.336625 master-0 kubenswrapper[37036]: W0312 14:37:07.336580 37036 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pode3b3151f_a9b1_43e7_9aec_96d4ff896bf2.slice/crio-58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11 WatchSource:0}: Error finding container 58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11: Status 404 returned error can't find the container with id 58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11 Mar 12 14:37:07.551700 master-0 kubenswrapper[37036]: I0312 14:37:07.551597 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2","Type":"ContainerStarted","Data":"58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11"} Mar 12 14:37:08.559760 master-0 kubenswrapper[37036]: I0312 14:37:08.559668 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2","Type":"ContainerStarted","Data":"852a885b7715f01e617c3f371756c175c3c437f2c97c3223d69b9af5d6424ea5"} Mar 12 14:37:08.578692 master-0 kubenswrapper[37036]: I0312 14:37:08.578605 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=3.578585871 podStartE2EDuration="3.578585871s" podCreationTimestamp="2026-03-12 14:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:08.576081468 +0000 UTC m=+87.583822405" watchObservedRunningTime="2026-03-12 14:37:08.578585871 +0000 UTC m=+87.586326808" Mar 12 14:37:08.931458 master-0 kubenswrapper[37036]: I0312 14:37:08.931319 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 12 14:37:08.931458 
master-0 kubenswrapper[37036]: I0312 14:37:08.931385 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 12 14:37:10.248185 master-0 kubenswrapper[37036]: I0312 14:37:10.248083 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" podUID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" containerName="oauth-openshift" containerID="cri-o://739211b307257fb02a5e37ad3189305b807bf3aad4882366592f2eb9bde6dea0" gracePeriod=15 Mar 12 14:37:10.577760 master-0 kubenswrapper[37036]: I0312 14:37:10.577628 37036 generic.go:334] "Generic (PLEG): container finished" podID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" containerID="739211b307257fb02a5e37ad3189305b807bf3aad4882366592f2eb9bde6dea0" exitCode=0 Mar 12 14:37:10.577760 master-0 kubenswrapper[37036]: I0312 14:37:10.577682 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" event={"ID":"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34","Type":"ContainerDied","Data":"739211b307257fb02a5e37ad3189305b807bf3aad4882366592f2eb9bde6dea0"} Mar 12 14:37:11.130078 master-0 kubenswrapper[37036]: I0312 14:37:11.130028 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:37:11.267645 master-0 kubenswrapper[37036]: I0312 14:37:11.267576 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.267645 master-0 kubenswrapper[37036]: I0312 14:37:11.267659 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.268362 master-0 kubenswrapper[37036]: I0312 14:37:11.267724 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.268362 master-0 kubenswrapper[37036]: I0312 14:37:11.267784 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:11.268362 master-0 kubenswrapper[37036]: I0312 14:37:11.268252 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:11.268362 master-0 kubenswrapper[37036]: I0312 14:37:11.268333 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.268862 master-0 kubenswrapper[37036]: I0312 14:37:11.268726 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:11.268862 master-0 kubenswrapper[37036]: I0312 14:37:11.268832 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269483 master-0 kubenswrapper[37036]: I0312 14:37:11.269318 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269483 master-0 kubenswrapper[37036]: I0312 14:37:11.269369 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8tp9\" (UniqueName: \"kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269483 master-0 kubenswrapper[37036]: I0312 14:37:11.269399 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269483 master-0 kubenswrapper[37036]: I0312 14:37:11.269441 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: 
\"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269483 master-0 kubenswrapper[37036]: I0312 14:37:11.269467 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269724 master-0 kubenswrapper[37036]: I0312 14:37:11.269525 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269724 master-0 kubenswrapper[37036]: I0312 14:37:11.269558 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.269724 master-0 kubenswrapper[37036]: I0312 14:37:11.269589 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle\") pod \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\" (UID: \"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34\") " Mar 12 14:37:11.270017 master-0 kubenswrapper[37036]: I0312 14:37:11.269918 37036 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.270017 master-0 kubenswrapper[37036]: I0312 
14:37:11.269943 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.270017 master-0 kubenswrapper[37036]: I0312 14:37:11.269957 37036 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.270767 master-0 kubenswrapper[37036]: I0312 14:37:11.270714 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:11.270767 master-0 kubenswrapper[37036]: I0312 14:37:11.270734 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:11.271225 master-0 kubenswrapper[37036]: I0312 14:37:11.271167 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.272796 master-0 kubenswrapper[37036]: I0312 14:37:11.272755 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.272916 master-0 kubenswrapper[37036]: I0312 14:37:11.272861 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.273059 master-0 kubenswrapper[37036]: I0312 14:37:11.273021 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.273471 master-0 kubenswrapper[37036]: I0312 14:37:11.273429 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.273578 master-0 kubenswrapper[37036]: I0312 14:37:11.273528 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9" (OuterVolumeSpecName: "kube-api-access-t8tp9") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "kube-api-access-t8tp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:11.273859 master-0 kubenswrapper[37036]: I0312 14:37:11.273799 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.273859 master-0 kubenswrapper[37036]: I0312 14:37:11.273834 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" (UID: "bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371105 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371153 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371167 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8tp9\" (UniqueName: \"kubernetes.io/projected/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-kube-api-access-t8tp9\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371176 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371187 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371197 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-login\") on node \"master-0\" DevicePath 
\"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371206 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371216 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371225 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.371228 master-0 kubenswrapper[37036]: I0312 14:37:11.371235 37036 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:11.596564 master-0 kubenswrapper[37036]: I0312 14:37:11.596494 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" event={"ID":"bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34","Type":"ContainerDied","Data":"23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8"} Mar 12 14:37:11.596564 master-0 kubenswrapper[37036]: I0312 14:37:11.596561 37036 scope.go:117] "RemoveContainer" containerID="739211b307257fb02a5e37ad3189305b807bf3aad4882366592f2eb9bde6dea0" Mar 12 14:37:11.597256 master-0 kubenswrapper[37036]: I0312 14:37:11.596590 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d94bf6d99-jdmf7" Mar 12 14:37:12.434220 master-0 kubenswrapper[37036]: E0312 14:37:12.434139 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]" Mar 12 14:37:12.444377 master-0 kubenswrapper[37036]: I0312 14:37:12.444330 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-ftnvc" Mar 12 14:37:12.531323 master-0 kubenswrapper[37036]: I0312 14:37:12.529302 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-859898ff78-qv7v9"] Mar 12 14:37:12.531323 master-0 kubenswrapper[37036]: E0312 14:37:12.529749 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" containerName="oauth-openshift" Mar 12 14:37:12.531323 master-0 kubenswrapper[37036]: I0312 14:37:12.529776 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" containerName="oauth-openshift" Mar 12 14:37:12.531323 master-0 kubenswrapper[37036]: I0312 14:37:12.530104 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" containerName="oauth-openshift" Mar 12 14:37:12.531323 master-0 kubenswrapper[37036]: I0312 14:37:12.530791 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.545960 master-0 kubenswrapper[37036]: I0312 14:37:12.544345 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 14:37:12.545960 master-0 kubenswrapper[37036]: I0312 14:37:12.544765 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 14:37:12.545960 master-0 kubenswrapper[37036]: I0312 14:37:12.545023 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 14:37:12.546639 master-0 kubenswrapper[37036]: I0312 14:37:12.546584 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 14:37:12.547340 master-0 kubenswrapper[37036]: I0312 14:37:12.547309 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-fsj54" Mar 12 14:37:12.547620 master-0 kubenswrapper[37036]: I0312 14:37:12.547587 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 14:37:12.547852 master-0 kubenswrapper[37036]: I0312 14:37:12.547822 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 14:37:12.549288 master-0 kubenswrapper[37036]: I0312 14:37:12.549247 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 14:37:12.549538 master-0 kubenswrapper[37036]: I0312 14:37:12.549516 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 14:37:12.549708 master-0 kubenswrapper[37036]: I0312 14:37:12.549688 37036 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 14:37:12.549872 master-0 kubenswrapper[37036]: I0312 14:37:12.549851 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 14:37:12.550135 master-0 kubenswrapper[37036]: I0312 14:37:12.550111 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 14:37:12.557742 master-0 kubenswrapper[37036]: I0312 14:37:12.557683 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"] Mar 12 14:37:12.561098 master-0 kubenswrapper[37036]: I0312 14:37:12.558634 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859898ff78-qv7v9"] Mar 12 14:37:12.572870 master-0 kubenswrapper[37036]: I0312 14:37:12.572779 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-d94bf6d99-jdmf7"] Mar 12 14:37:12.590538 master-0 kubenswrapper[37036]: I0312 14:37:12.577685 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 14:37:12.592720 master-0 kubenswrapper[37036]: I0312 14:37:12.592624 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593174 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " 
pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593217 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593250 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-policies\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593277 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crv76\" (UniqueName: \"kubernetes.io/projected/1172bb7b-c430-4011-b869-0f6ba03987d5-kube-api-access-crv76\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593300 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-dir\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593335 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593366 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593427 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-login\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593457 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593485 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-error\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593515 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-session\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593557 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.596576 master-0 kubenswrapper[37036]: I0312 14:37:12.593580 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695570 master-0 kubenswrapper[37036]: I0312 14:37:12.695431 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-error\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695570 master-0 kubenswrapper[37036]: I0312 14:37:12.695500 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-session\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695570 master-0 kubenswrapper[37036]: I0312 14:37:12.695535 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695570 master-0 kubenswrapper[37036]: I0312 14:37:12.695562 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695593 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: 
\"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695610 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695642 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-policies\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695664 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crv76\" (UniqueName: \"kubernetes.io/projected/1172bb7b-c430-4011-b869-0f6ba03987d5-kube-api-access-crv76\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695683 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-dir\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695717 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695760 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695802 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-login\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.695847 master-0 kubenswrapper[37036]: I0312 14:37:12.695819 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.696485 master-0 kubenswrapper[37036]: I0312 14:37:12.696436 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-cliconfig\") 
pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.696704 master-0 kubenswrapper[37036]: I0312 14:37:12.696678 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.697184 master-0 kubenswrapper[37036]: I0312 14:37:12.697105 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-dir\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.697673 master-0 kubenswrapper[37036]: I0312 14:37:12.697639 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.699444 master-0 kubenswrapper[37036]: I0312 14:37:12.699397 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1172bb7b-c430-4011-b869-0f6ba03987d5-audit-policies\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.699731 master-0 kubenswrapper[37036]: I0312 14:37:12.699695 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-error\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.699955 master-0 kubenswrapper[37036]: I0312 14:37:12.699919 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.700169 master-0 kubenswrapper[37036]: I0312 14:37:12.700104 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-login\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.700342 master-0 kubenswrapper[37036]: I0312 14:37:12.700308 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.702621 master-0 kubenswrapper[37036]: I0312 14:37:12.702051 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-session\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.707027 master-0 kubenswrapper[37036]: I0312 14:37:12.706917 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.719546 master-0 kubenswrapper[37036]: I0312 14:37:12.719440 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1172bb7b-c430-4011-b869-0f6ba03987d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.722972 master-0 kubenswrapper[37036]: I0312 14:37:12.722538 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crv76\" (UniqueName: \"kubernetes.io/projected/1172bb7b-c430-4011-b869-0f6ba03987d5-kube-api-access-crv76\") pod \"oauth-openshift-859898ff78-qv7v9\" (UID: \"1172bb7b-c430-4011-b869-0f6ba03987d5\") " pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:12.732860 master-0 kubenswrapper[37036]: I0312 14:37:12.732683 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"] Mar 12 14:37:12.733434 master-0 kubenswrapper[37036]: I0312 14:37:12.733380 37036 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" podUID="a335739e-a77f-4315-9aa8-4eb3361acd6a" containerName="controller-manager" containerID="cri-o://1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285" gracePeriod=30 Mar 12 14:37:12.761786 master-0 kubenswrapper[37036]: I0312 14:37:12.761671 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:37:12.761997 master-0 kubenswrapper[37036]: I0312 14:37:12.761942 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" podUID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" containerName="route-controller-manager" containerID="cri-o://2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9" gracePeriod=30 Mar 12 14:37:12.932861 master-0 kubenswrapper[37036]: I0312 14:37:12.932803 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:13.256237 master-0 kubenswrapper[37036]: I0312 14:37:13.256171 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34" path="/var/lib/kubelet/pods/bff9beb6-f6cc-4fd0-9d22-aaf1221c8b34/volumes" Mar 12 14:37:13.344930 master-0 kubenswrapper[37036]: I0312 14:37:13.342862 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:37:13.349933 master-0 kubenswrapper[37036]: I0312 14:37:13.349839 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" Mar 12 14:37:13.455252 master-0 kubenswrapper[37036]: I0312 14:37:13.455212 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859898ff78-qv7v9"] Mar 12 14:37:13.459664 master-0 kubenswrapper[37036]: W0312 14:37:13.459604 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1172bb7b_c430_4011_b869_0f6ba03987d5.slice/crio-1ab42e86eeaf506701f99cdefb452b00138c66a4007aa873eef1897e401e1a34 WatchSource:0}: Error finding container 1ab42e86eeaf506701f99cdefb452b00138c66a4007aa873eef1897e401e1a34: Status 404 returned error can't find the container with id 1ab42e86eeaf506701f99cdefb452b00138c66a4007aa873eef1897e401e1a34 Mar 12 14:37:13.508922 master-0 kubenswrapper[37036]: I0312 14:37:13.508780 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert\") pod \"a335739e-a77f-4315-9aa8-4eb3361acd6a\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " Mar 12 14:37:13.508922 master-0 kubenswrapper[37036]: I0312 14:37:13.508858 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca\") pod \"a335739e-a77f-4315-9aa8-4eb3361acd6a\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " Mar 12 14:37:13.509133 master-0 kubenswrapper[37036]: I0312 14:37:13.508938 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert\") pod \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " Mar 12 14:37:13.509133 master-0 kubenswrapper[37036]: I0312 
14:37:13.508987 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config\") pod \"a335739e-a77f-4315-9aa8-4eb3361acd6a\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " Mar 12 14:37:13.509133 master-0 kubenswrapper[37036]: I0312 14:37:13.509017 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckqrh\" (UniqueName: \"kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh\") pod \"a335739e-a77f-4315-9aa8-4eb3361acd6a\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " Mar 12 14:37:13.509133 master-0 kubenswrapper[37036]: I0312 14:37:13.509068 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xzc5\" (UniqueName: \"kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5\") pod \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " Mar 12 14:37:13.509133 master-0 kubenswrapper[37036]: I0312 14:37:13.509108 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config\") pod \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " Mar 12 14:37:13.509285 master-0 kubenswrapper[37036]: I0312 14:37:13.509154 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca\") pod \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\" (UID: \"f18c7090-21d2-45e8-abf1-7ebf7e151c77\") " Mar 12 14:37:13.509285 master-0 kubenswrapper[37036]: I0312 14:37:13.509180 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles\") pod \"a335739e-a77f-4315-9aa8-4eb3361acd6a\" (UID: \"a335739e-a77f-4315-9aa8-4eb3361acd6a\") " Mar 12 14:37:13.510423 master-0 kubenswrapper[37036]: I0312 14:37:13.510390 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a335739e-a77f-4315-9aa8-4eb3361acd6a" (UID: "a335739e-a77f-4315-9aa8-4eb3361acd6a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:13.510837 master-0 kubenswrapper[37036]: I0312 14:37:13.510808 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a335739e-a77f-4315-9aa8-4eb3361acd6a" (UID: "a335739e-a77f-4315-9aa8-4eb3361acd6a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:13.511513 master-0 kubenswrapper[37036]: I0312 14:37:13.511488 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a335739e-a77f-4315-9aa8-4eb3361acd6a" (UID: "a335739e-a77f-4315-9aa8-4eb3361acd6a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:13.511887 master-0 kubenswrapper[37036]: I0312 14:37:13.511861 37036 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.511968 master-0 kubenswrapper[37036]: I0312 14:37:13.511889 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a335739e-a77f-4315-9aa8-4eb3361acd6a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.511968 master-0 kubenswrapper[37036]: I0312 14:37:13.511918 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.511968 master-0 kubenswrapper[37036]: I0312 14:37:13.511915 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config" (OuterVolumeSpecName: "config") pod "f18c7090-21d2-45e8-abf1-7ebf7e151c77" (UID: "f18c7090-21d2-45e8-abf1-7ebf7e151c77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:13.512425 master-0 kubenswrapper[37036]: I0312 14:37:13.512393 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config" (OuterVolumeSpecName: "config") pod "a335739e-a77f-4315-9aa8-4eb3361acd6a" (UID: "a335739e-a77f-4315-9aa8-4eb3361acd6a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:13.512488 master-0 kubenswrapper[37036]: I0312 14:37:13.512420 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca" (OuterVolumeSpecName: "client-ca") pod "f18c7090-21d2-45e8-abf1-7ebf7e151c77" (UID: "f18c7090-21d2-45e8-abf1-7ebf7e151c77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:13.513287 master-0 kubenswrapper[37036]: I0312 14:37:13.513262 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f18c7090-21d2-45e8-abf1-7ebf7e151c77" (UID: "f18c7090-21d2-45e8-abf1-7ebf7e151c77"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:13.514301 master-0 kubenswrapper[37036]: I0312 14:37:13.514247 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh" (OuterVolumeSpecName: "kube-api-access-ckqrh") pod "a335739e-a77f-4315-9aa8-4eb3361acd6a" (UID: "a335739e-a77f-4315-9aa8-4eb3361acd6a"). InnerVolumeSpecName "kube-api-access-ckqrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:13.514625 master-0 kubenswrapper[37036]: I0312 14:37:13.514593 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5" (OuterVolumeSpecName: "kube-api-access-7xzc5") pod "f18c7090-21d2-45e8-abf1-7ebf7e151c77" (UID: "f18c7090-21d2-45e8-abf1-7ebf7e151c77"). InnerVolumeSpecName "kube-api-access-7xzc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:13.613644 master-0 kubenswrapper[37036]: I0312 14:37:13.613590 37036 patch_prober.go:28] interesting pod/console-6b77f48c6d-w6489 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 12 14:37:13.614098 master-0 kubenswrapper[37036]: I0312 14:37:13.614060 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6b77f48c6d-w6489" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 12 14:37:13.614223 master-0 kubenswrapper[37036]: I0312 14:37:13.614125 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18c7090-21d2-45e8-abf1-7ebf7e151c77-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.614340 master-0 kubenswrapper[37036]: I0312 14:37:13.614320 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a335739e-a77f-4315-9aa8-4eb3361acd6a-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.614440 master-0 kubenswrapper[37036]: I0312 14:37:13.614423 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckqrh\" (UniqueName: \"kubernetes.io/projected/a335739e-a77f-4315-9aa8-4eb3361acd6a-kube-api-access-ckqrh\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.614563 master-0 kubenswrapper[37036]: I0312 14:37:13.614549 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xzc5\" (UniqueName: \"kubernetes.io/projected/f18c7090-21d2-45e8-abf1-7ebf7e151c77-kube-api-access-7xzc5\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.614657 master-0 kubenswrapper[37036]: I0312 
14:37:13.614643 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.616085 master-0 kubenswrapper[37036]: I0312 14:37:13.616071 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f18c7090-21d2-45e8-abf1-7ebf7e151c77-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:13.674997 master-0 kubenswrapper[37036]: I0312 14:37:13.674946 37036 generic.go:334] "Generic (PLEG): container finished" podID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" containerID="2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9" exitCode=0 Mar 12 14:37:13.675100 master-0 kubenswrapper[37036]: I0312 14:37:13.675029 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" event={"ID":"f18c7090-21d2-45e8-abf1-7ebf7e151c77","Type":"ContainerDied","Data":"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9"} Mar 12 14:37:13.675100 master-0 kubenswrapper[37036]: I0312 14:37:13.675061 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" event={"ID":"f18c7090-21d2-45e8-abf1-7ebf7e151c77","Type":"ContainerDied","Data":"36f9994625807a2f2debb395abd66fc1d434681fc029a6cf92a88a6d8628134e"} Mar 12 14:37:13.675100 master-0 kubenswrapper[37036]: I0312 14:37:13.675081 37036 scope.go:117] "RemoveContainer" containerID="2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9" Mar 12 14:37:13.675202 master-0 kubenswrapper[37036]: I0312 14:37:13.675185 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb" Mar 12 14:37:13.680878 master-0 kubenswrapper[37036]: I0312 14:37:13.680458 37036 generic.go:334] "Generic (PLEG): container finished" podID="a335739e-a77f-4315-9aa8-4eb3361acd6a" containerID="1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285" exitCode=0 Mar 12 14:37:13.680878 master-0 kubenswrapper[37036]: I0312 14:37:13.680569 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" Mar 12 14:37:13.682446 master-0 kubenswrapper[37036]: I0312 14:37:13.682123 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" event={"ID":"a335739e-a77f-4315-9aa8-4eb3361acd6a","Type":"ContainerDied","Data":"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285"} Mar 12 14:37:13.682446 master-0 kubenswrapper[37036]: I0312 14:37:13.682172 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667cf89f7-gvhgl" event={"ID":"a335739e-a77f-4315-9aa8-4eb3361acd6a","Type":"ContainerDied","Data":"a85406199e50819889fdf742efed88428e1e7f4b54ae87737a0836191f2ab799"} Mar 12 14:37:13.686209 master-0 kubenswrapper[37036]: I0312 14:37:13.686166 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" event={"ID":"1172bb7b-c430-4011-b869-0f6ba03987d5","Type":"ContainerStarted","Data":"1ab42e86eeaf506701f99cdefb452b00138c66a4007aa873eef1897e401e1a34"} Mar 12 14:37:13.687675 master-0 kubenswrapper[37036]: I0312 14:37:13.687075 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" Mar 12 14:37:13.691013 master-0 kubenswrapper[37036]: I0312 14:37:13.690511 37036 patch_prober.go:28] interesting 
pod/oauth-openshift-859898ff78-qv7v9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.101:6443/healthz\": dial tcp 10.128.0.101:6443: connect: connection refused" start-of-body= Mar 12 14:37:13.691013 master-0 kubenswrapper[37036]: I0312 14:37:13.690576 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" podUID="1172bb7b-c430-4011-b869-0f6ba03987d5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.101:6443/healthz\": dial tcp 10.128.0.101:6443: connect: connection refused" Mar 12 14:37:13.697708 master-0 kubenswrapper[37036]: I0312 14:37:13.697670 37036 scope.go:117] "RemoveContainer" containerID="2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9" Mar 12 14:37:13.700270 master-0 kubenswrapper[37036]: E0312 14:37:13.700227 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9\": container with ID starting with 2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9 not found: ID does not exist" containerID="2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9" Mar 12 14:37:13.700509 master-0 kubenswrapper[37036]: I0312 14:37:13.700476 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9"} err="failed to get container status \"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9\": rpc error: code = NotFound desc = could not find container \"2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9\": container with ID starting with 2243679fdb8cbfc8adabe8fffbf5e9e8b0f7cbed367675c1529d2c11dc32c0d9 not found: ID does not exist" Mar 12 14:37:13.700612 master-0 kubenswrapper[37036]: I0312 
14:37:13.700596 37036 scope.go:117] "RemoveContainer" containerID="1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285" Mar 12 14:37:13.716910 master-0 kubenswrapper[37036]: I0312 14:37:13.716759 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" podStartSLOduration=7.71670132 podStartE2EDuration="7.71670132s" podCreationTimestamp="2026-03-12 14:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:13.706713509 +0000 UTC m=+92.714454446" watchObservedRunningTime="2026-03-12 14:37:13.71670132 +0000 UTC m=+92.724442257" Mar 12 14:37:13.727067 master-0 kubenswrapper[37036]: I0312 14:37:13.723255 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:37:13.727864 master-0 kubenswrapper[37036]: I0312 14:37:13.727762 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbf8bd5df-vsfpb"] Mar 12 14:37:13.749423 master-0 kubenswrapper[37036]: I0312 14:37:13.749364 37036 scope.go:117] "RemoveContainer" containerID="1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285" Mar 12 14:37:13.750365 master-0 kubenswrapper[37036]: E0312 14:37:13.750336 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285\": container with ID starting with 1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285 not found: ID does not exist" containerID="1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285" Mar 12 14:37:13.750444 master-0 kubenswrapper[37036]: I0312 14:37:13.750366 37036 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285"} err="failed to get container status \"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285\": rpc error: code = NotFound desc = could not find container \"1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285\": container with ID starting with 1efa1711c0380030e152f693704bb3bbf7059dc65c603dfb7c3a615d9d088285 not found: ID does not exist" Mar 12 14:37:13.764348 master-0 kubenswrapper[37036]: I0312 14:37:13.764217 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"] Mar 12 14:37:13.770843 master-0 kubenswrapper[37036]: I0312 14:37:13.770801 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-667cf89f7-gvhgl"] Mar 12 14:37:13.963357 master-0 kubenswrapper[37036]: I0312 14:37:13.963290 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"] Mar 12 14:37:13.963821 master-0 kubenswrapper[37036]: E0312 14:37:13.963797 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a335739e-a77f-4315-9aa8-4eb3361acd6a" containerName="controller-manager" Mar 12 14:37:13.963821 master-0 kubenswrapper[37036]: I0312 14:37:13.963812 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a335739e-a77f-4315-9aa8-4eb3361acd6a" containerName="controller-manager" Mar 12 14:37:13.963946 master-0 kubenswrapper[37036]: E0312 14:37:13.963854 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" containerName="route-controller-manager" Mar 12 14:37:13.963946 master-0 kubenswrapper[37036]: I0312 14:37:13.963861 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" containerName="route-controller-manager" Mar 12 14:37:13.964076 master-0 
kubenswrapper[37036]: I0312 14:37:13.964048 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" containerName="route-controller-manager" Mar 12 14:37:13.964154 master-0 kubenswrapper[37036]: I0312 14:37:13.964096 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a335739e-a77f-4315-9aa8-4eb3361acd6a" containerName="controller-manager" Mar 12 14:37:13.964523 master-0 kubenswrapper[37036]: I0312 14:37:13.964501 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:13.967124 master-0 kubenswrapper[37036]: I0312 14:37:13.967080 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"] Mar 12 14:37:13.968274 master-0 kubenswrapper[37036]: I0312 14:37:13.968242 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:13.971979 master-0 kubenswrapper[37036]: I0312 14:37:13.971938 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82tbw" Mar 12 14:37:13.972278 master-0 kubenswrapper[37036]: I0312 14:37:13.972247 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:37:13.972348 master-0 kubenswrapper[37036]: I0312 14:37:13.972303 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:37:13.973312 master-0 kubenswrapper[37036]: I0312 14:37:13.973276 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:37:13.973440 master-0 kubenswrapper[37036]: I0312 14:37:13.973411 37036 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:37:13.973587 master-0 kubenswrapper[37036]: I0312 14:37:13.973555 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:37:13.973652 master-0 kubenswrapper[37036]: I0312 14:37:13.973610 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:37:13.974238 master-0 kubenswrapper[37036]: I0312 14:37:13.974209 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:37:13.975081 master-0 kubenswrapper[37036]: I0312 14:37:13.975052 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:37:13.975448 master-0 kubenswrapper[37036]: I0312 14:37:13.975418 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2qg98" Mar 12 14:37:13.976155 master-0 kubenswrapper[37036]: I0312 14:37:13.976126 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:37:13.978128 master-0 kubenswrapper[37036]: I0312 14:37:13.978099 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:37:13.981617 master-0 kubenswrapper[37036]: I0312 14:37:13.981572 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"] Mar 12 14:37:13.983483 master-0 kubenswrapper[37036]: I0312 14:37:13.983451 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:37:13.984084 master-0 kubenswrapper[37036]: I0312 14:37:13.984044 37036 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"] Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023162 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023241 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023285 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023369 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.024048 master-0 
kubenswrapper[37036]: I0312 14:37:14.023392 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwrr\" (UniqueName: \"kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023415 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023492 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023534 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.024048 master-0 kubenswrapper[37036]: I0312 14:37:14.023569 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssl2\" 
(UniqueName: \"kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.124610 master-0 kubenswrapper[37036]: I0312 14:37:14.124548 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:14.124831 master-0 kubenswrapper[37036]: I0312 14:37:14.124629 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.124831 master-0 kubenswrapper[37036]: I0312 14:37:14.124725 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssl2\" (UniqueName: \"kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:14.125211 master-0 kubenswrapper[37036]: I0312 14:37:14.125173 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.126396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.126477 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.126524 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.127488 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.127724 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.128037 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.128053 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.128091 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.128128 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwrr\" (UniqueName: \"kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129001 master-0 kubenswrapper[37036]: I0312 14:37:14.128171 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.129706 master-0 kubenswrapper[37036]: I0312 14:37:14.129663 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.131079 master-0 kubenswrapper[37036]: I0312 14:37:14.131001 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.147460 master-0 kubenswrapper[37036]: I0312 14:37:14.147402 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xssl2\" (UniqueName: \"kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2\") pod \"route-controller-manager-76d5594548-b46pt\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") " pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.151181 master-0 kubenswrapper[37036]: I0312 14:37:14.151140 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwrr\" (UniqueName: \"kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr\") pod \"controller-manager-7459fbbf55-6pddj\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") " pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.290391 master-0 kubenswrapper[37036]: I0312 14:37:14.290259 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:14.320390 master-0 kubenswrapper[37036]: I0312 14:37:14.320328 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:14.672863 master-0 kubenswrapper[37036]: I0312 14:37:14.672717 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"]
Mar 12 14:37:14.681650 master-0 kubenswrapper[37036]: W0312 14:37:14.681574 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod467464b8_1685_43ac_8934_cbb0bccf0143.slice/crio-a35fcc6ad8d9e62e4ec92a7e0e9dc08efcb1048e9e6769bcee30b95dfa1a9b0c WatchSource:0}: Error finding container a35fcc6ad8d9e62e4ec92a7e0e9dc08efcb1048e9e6769bcee30b95dfa1a9b0c: Status 404 returned error can't find the container with id a35fcc6ad8d9e62e4ec92a7e0e9dc08efcb1048e9e6769bcee30b95dfa1a9b0c
Mar 12 14:37:14.697949 master-0 kubenswrapper[37036]: I0312 14:37:14.697880 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" event={"ID":"467464b8-1685-43ac-8934-cbb0bccf0143","Type":"ContainerStarted","Data":"a35fcc6ad8d9e62e4ec92a7e0e9dc08efcb1048e9e6769bcee30b95dfa1a9b0c"}
Mar 12 14:37:14.699212 master-0 kubenswrapper[37036]: I0312 14:37:14.699174 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9" event={"ID":"1172bb7b-c430-4011-b869-0f6ba03987d5","Type":"ContainerStarted","Data":"07edbe479b4e439b79ca83542693555083c3ee8d0c664c32c8c3f2ca8b59c3cb"}
Mar 12 14:37:14.703957 master-0 kubenswrapper[37036]: I0312 14:37:14.703880 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-859898ff78-qv7v9"
Mar 12 14:37:14.779527 master-0 kubenswrapper[37036]: I0312 14:37:14.772828 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"]
Mar 12 14:37:15.245944 master-0 kubenswrapper[37036]: I0312 14:37:15.245294 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a335739e-a77f-4315-9aa8-4eb3361acd6a" path="/var/lib/kubelet/pods/a335739e-a77f-4315-9aa8-4eb3361acd6a/volumes"
Mar 12 14:37:15.245944 master-0 kubenswrapper[37036]: I0312 14:37:15.245827 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18c7090-21d2-45e8-abf1-7ebf7e151c77" path="/var/lib/kubelet/pods/f18c7090-21d2-45e8-abf1-7ebf7e151c77/volumes"
Mar 12 14:37:15.708520 master-0 kubenswrapper[37036]: I0312 14:37:15.708400 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" event={"ID":"467464b8-1685-43ac-8934-cbb0bccf0143","Type":"ContainerStarted","Data":"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6"}
Mar 12 14:37:15.708986 master-0 kubenswrapper[37036]: I0312 14:37:15.708717 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:15.710366 master-0 kubenswrapper[37036]: I0312 14:37:15.710326 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" event={"ID":"88bd8c29-8f46-4398-b61c-3da3014a6ea3","Type":"ContainerStarted","Data":"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87"}
Mar 12 14:37:15.710404 master-0 kubenswrapper[37036]: I0312 14:37:15.710369 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" event={"ID":"88bd8c29-8f46-4398-b61c-3da3014a6ea3","Type":"ContainerStarted","Data":"bdb8108a57a4db70c026f2e24af6135b790509958b9634ccadfe20118c0b4f40"}
Mar 12 14:37:15.717529 master-0 kubenswrapper[37036]: I0312 14:37:15.717471 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:15.746276 master-0 kubenswrapper[37036]: I0312 14:37:15.746179 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" podStartSLOduration=3.746126819 podStartE2EDuration="3.746126819s" podCreationTimestamp="2026-03-12 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:15.72587393 +0000 UTC m=+94.733614867" watchObservedRunningTime="2026-03-12 14:37:15.746126819 +0000 UTC m=+94.753867756"
Mar 12 14:37:15.780802 master-0 kubenswrapper[37036]: I0312 14:37:15.780041 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" podStartSLOduration=3.78002157 podStartE2EDuration="3.78002157s" podCreationTimestamp="2026-03-12 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:15.778465771 +0000 UTC m=+94.786206718" watchObservedRunningTime="2026-03-12 14:37:15.78002157 +0000 UTC m=+94.787762497"
Mar 12 14:37:16.717029 master-0 kubenswrapper[37036]: I0312 14:37:16.716943 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:16.721891 master-0 kubenswrapper[37036]: I0312 14:37:16.721650 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:18.508874 master-0 kubenswrapper[37036]: E0312 14:37:18.508813 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 14:37:18.932129 master-0 kubenswrapper[37036]: I0312 14:37:18.932007 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body=
Mar 12 14:37:18.932441 master-0 kubenswrapper[37036]: I0312 14:37:18.932403 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused"
Mar 12 14:37:20.053723 master-0 kubenswrapper[37036]: I0312 14:37:20.053672 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b77f48c6d-w6489"]
Mar 12 14:37:20.101444 master-0 kubenswrapper[37036]: I0312 14:37:20.101388 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"]
Mar 12 14:37:20.102545 master-0 kubenswrapper[37036]: I0312 14:37:20.102514 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.110555 master-0 kubenswrapper[37036]: I0312 14:37:20.110515 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"]
Mar 12 14:37:20.116635 master-0 kubenswrapper[37036]: I0312 14:37:20.116574 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116759 master-0 kubenswrapper[37036]: I0312 14:37:20.116636 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116759 master-0 kubenswrapper[37036]: I0312 14:37:20.116657 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116759 master-0 kubenswrapper[37036]: I0312 14:37:20.116692 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mskzv\" (UniqueName: \"kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116759 master-0 kubenswrapper[37036]: I0312 14:37:20.116735 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116986 master-0 kubenswrapper[37036]: I0312 14:37:20.116768 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.116986 master-0 kubenswrapper[37036]: I0312 14:37:20.116799 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.218016 master-0 kubenswrapper[37036]: I0312 14:37:20.217945 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.218370 master-0 kubenswrapper[37036]: I0312 14:37:20.218349 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.218555 master-0 kubenswrapper[37036]: I0312 14:37:20.218538 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.218722 master-0 kubenswrapper[37036]: I0312 14:37:20.218689 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.218861 master-0 kubenswrapper[37036]: I0312 14:37:20.218842 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.219036 master-0 kubenswrapper[37036]: I0312 14:37:20.219016 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mskzv\" (UniqueName: \"kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.219282 master-0 kubenswrapper[37036]: I0312 14:37:20.218957 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.219356 master-0 kubenswrapper[37036]: I0312 14:37:20.219227 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.219356 master-0 kubenswrapper[37036]: I0312 14:37:20.219301 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.220287 master-0 kubenswrapper[37036]: I0312 14:37:20.220268 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.220662 master-0 kubenswrapper[37036]: I0312 14:37:20.220630 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.222600 master-0 kubenswrapper[37036]: I0312 14:37:20.222538 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.224834 master-0 kubenswrapper[37036]: I0312 14:37:20.224753 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.243669 master-0 kubenswrapper[37036]: I0312 14:37:20.243623 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mskzv\" (UniqueName: \"kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv\") pod \"console-d7bc769d-7n7p2\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") " pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.472050 master-0 kubenswrapper[37036]: I0312 14:37:20.471878 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:20.868728 master-0 kubenswrapper[37036]: I0312 14:37:20.868639 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"]
Mar 12 14:37:20.873180 master-0 kubenswrapper[37036]: W0312 14:37:20.873122 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd192dc2b_d1d6_45fe_bdd1_9ceb6ec6e687.slice/crio-9d6e4aa19467aa0fd054b900eafce5c903e382b7c83456640b5767327c3cbfb0 WatchSource:0}: Error finding container 9d6e4aa19467aa0fd054b900eafce5c903e382b7c83456640b5767327c3cbfb0: Status 404 returned error can't find the container with id 9d6e4aa19467aa0fd054b900eafce5c903e382b7c83456640b5767327c3cbfb0
Mar 12 14:37:20.930805 master-0 kubenswrapper[37036]: I0312 14:37:20.930758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:37:20.933603 master-0 kubenswrapper[37036]: I0312 14:37:20.933569 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 14:37:21.031829 master-0 kubenswrapper[37036]: I0312 14:37:21.031771 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") pod \"5a56d42a-efb4-4956-acab-d12c7ca5276e\" (UID: \"5a56d42a-efb4-4956-acab-d12c7ca5276e\") "
Mar 12 14:37:21.035426 master-0 kubenswrapper[37036]: I0312 14:37:21.035366 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a56d42a-efb4-4956-acab-d12c7ca5276e" (UID: "5a56d42a-efb4-4956-acab-d12c7ca5276e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:37:21.133966 master-0 kubenswrapper[37036]: I0312 14:37:21.133802 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a56d42a-efb4-4956-acab-d12c7ca5276e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 14:37:21.749426 master-0 kubenswrapper[37036]: I0312 14:37:21.749340 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7bc769d-7n7p2" event={"ID":"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687","Type":"ContainerStarted","Data":"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"}
Mar 12 14:37:21.749654 master-0 kubenswrapper[37036]: I0312 14:37:21.749453 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7bc769d-7n7p2" event={"ID":"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687","Type":"ContainerStarted","Data":"9d6e4aa19467aa0fd054b900eafce5c903e382b7c83456640b5767327c3cbfb0"}
Mar 12 14:37:21.769192 master-0 kubenswrapper[37036]: I0312 14:37:21.769083 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d7bc769d-7n7p2" podStartSLOduration=1.769065564 podStartE2EDuration="1.769065564s" podCreationTimestamp="2026-03-12 14:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:21.768375687 +0000 UTC m=+100.776116654" watchObservedRunningTime="2026-03-12 14:37:21.769065564 +0000 UTC m=+100.776806501"
Mar 12 14:37:22.475047 master-0 kubenswrapper[37036]: E0312 14:37:22.474984 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 14:37:28.931819 master-0 kubenswrapper[37036]: I0312 14:37:28.931726 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body=
Mar 12 14:37:28.931819 master-0 kubenswrapper[37036]: I0312 14:37:28.931800 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused"
Mar 12 14:37:30.473081 master-0 kubenswrapper[37036]: I0312 14:37:30.473019 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:30.473527 master-0 kubenswrapper[37036]: I0312 14:37:30.473093 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:37:30.475172 master-0 kubenswrapper[37036]: I0312 14:37:30.475028 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 14:37:30.475245 master-0 kubenswrapper[37036]: I0312 14:37:30.475194 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 14:37:31.794719 master-0 kubenswrapper[37036]: I0312 14:37:31.794611 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"]
Mar 12 14:37:31.795377 master-0 kubenswrapper[37036]: I0312 14:37:31.794859 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" podUID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" containerName="controller-manager" containerID="cri-o://9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87" gracePeriod=30
Mar 12 14:37:31.812222 master-0 kubenswrapper[37036]: I0312 14:37:31.812163 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"]
Mar 12 14:37:31.812447 master-0 kubenswrapper[37036]: I0312 14:37:31.812376 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" podUID="467464b8-1685-43ac-8934-cbb0bccf0143" containerName="route-controller-manager" containerID="cri-o://8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6" gracePeriod=30
Mar 12 14:37:32.453058 master-0 kubenswrapper[37036]: I0312 14:37:32.452850 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"
Mar 12 14:37:32.510187 master-0 kubenswrapper[37036]: I0312 14:37:32.508482 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca\") pod \"467464b8-1685-43ac-8934-cbb0bccf0143\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") "
Mar 12 14:37:32.510187 master-0 kubenswrapper[37036]: I0312 14:37:32.508597 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xssl2\" (UniqueName: \"kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2\") pod \"467464b8-1685-43ac-8934-cbb0bccf0143\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") "
Mar 12 14:37:32.510187 master-0 kubenswrapper[37036]: I0312 14:37:32.508662 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config\") pod \"467464b8-1685-43ac-8934-cbb0bccf0143\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") "
Mar 12 14:37:32.510187 master-0 kubenswrapper[37036]: I0312 14:37:32.508711 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert\") pod \"467464b8-1685-43ac-8934-cbb0bccf0143\" (UID: \"467464b8-1685-43ac-8934-cbb0bccf0143\") "
Mar 12 14:37:32.510584 master-0 kubenswrapper[37036]: I0312 14:37:32.510360 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config" (OuterVolumeSpecName: "config") pod "467464b8-1685-43ac-8934-cbb0bccf0143" (UID: "467464b8-1685-43ac-8934-cbb0bccf0143"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:37:32.511418 master-0 kubenswrapper[37036]: I0312 14:37:32.510969 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca" (OuterVolumeSpecName: "client-ca") pod "467464b8-1685-43ac-8934-cbb0bccf0143" (UID: "467464b8-1685-43ac-8934-cbb0bccf0143"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:37:32.524988 master-0 kubenswrapper[37036]: I0312 14:37:32.516066 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "467464b8-1685-43ac-8934-cbb0bccf0143" (UID: "467464b8-1685-43ac-8934-cbb0bccf0143"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:37:32.525464 master-0 kubenswrapper[37036]: I0312 14:37:32.525404 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2" (OuterVolumeSpecName: "kube-api-access-xssl2") pod "467464b8-1685-43ac-8934-cbb0bccf0143" (UID: "467464b8-1685-43ac-8934-cbb0bccf0143"). InnerVolumeSpecName "kube-api-access-xssl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:37:32.604762 master-0 kubenswrapper[37036]: I0312 14:37:32.604716 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj"
Mar 12 14:37:32.609957 master-0 kubenswrapper[37036]: I0312 14:37:32.609893 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xssl2\" (UniqueName: \"kubernetes.io/projected/467464b8-1685-43ac-8934-cbb0bccf0143-kube-api-access-xssl2\") on node \"master-0\" DevicePath \"\""
Mar 12 14:37:32.610041 master-0 kubenswrapper[37036]: I0312 14:37:32.609960 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:37:32.610041 master-0 kubenswrapper[37036]: I0312 14:37:32.609979 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/467464b8-1685-43ac-8934-cbb0bccf0143-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:37:32.610041 master-0 kubenswrapper[37036]: I0312 14:37:32.609995 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/467464b8-1685-43ac-8934-cbb0bccf0143-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:37:32.665165 master-0 kubenswrapper[37036]: E0312 14:37:32.665099 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 14:37:32.711132 master-0 kubenswrapper[37036]: I0312 14:37:32.711009 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca\") pod \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") "
Mar 12 14:37:32.711132 master-0 kubenswrapper[37036]: I0312 14:37:32.711076 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles\") pod \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") "
Mar 12 14:37:32.711338 master-0 kubenswrapper[37036]: I0312 14:37:32.711163 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config\") pod \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") "
Mar 12 14:37:32.711338 master-0 kubenswrapper[37036]: I0312 14:37:32.711268 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwrr\" (UniqueName: \"kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr\") pod \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") "
Mar 12 14:37:32.711338 master-0 kubenswrapper[37036]: I0312 14:37:32.711314 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert\") pod \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\" (UID: \"88bd8c29-8f46-4398-b61c-3da3014a6ea3\") "
Mar 12 14:37:32.711545 master-0 kubenswrapper[37036]: I0312 14:37:32.711500 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca" (OuterVolumeSpecName: "client-ca") pod "88bd8c29-8f46-4398-b61c-3da3014a6ea3" (UID: "88bd8c29-8f46-4398-b61c-3da3014a6ea3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:37:32.711588 master-0 kubenswrapper[37036]: I0312 14:37:32.711571 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "88bd8c29-8f46-4398-b61c-3da3014a6ea3" (UID: "88bd8c29-8f46-4398-b61c-3da3014a6ea3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:37:32.712219 master-0 kubenswrapper[37036]: I0312 14:37:32.712164 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config" (OuterVolumeSpecName: "config") pod "88bd8c29-8f46-4398-b61c-3da3014a6ea3" (UID: "88bd8c29-8f46-4398-b61c-3da3014a6ea3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:37:32.713648 master-0 kubenswrapper[37036]: I0312 14:37:32.713621 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88bd8c29-8f46-4398-b61c-3da3014a6ea3" (UID: "88bd8c29-8f46-4398-b61c-3da3014a6ea3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:37:32.713722 master-0 kubenswrapper[37036]: I0312 14:37:32.713683 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr" (OuterVolumeSpecName: "kube-api-access-xmwrr") pod "88bd8c29-8f46-4398-b61c-3da3014a6ea3" (UID: "88bd8c29-8f46-4398-b61c-3da3014a6ea3"). InnerVolumeSpecName "kube-api-access-xmwrr".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:32.812844 master-0 kubenswrapper[37036]: I0312 14:37:32.812768 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwrr\" (UniqueName: \"kubernetes.io/projected/88bd8c29-8f46-4398-b61c-3da3014a6ea3-kube-api-access-xmwrr\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:32.812844 master-0 kubenswrapper[37036]: I0312 14:37:32.812820 37036 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88bd8c29-8f46-4398-b61c-3da3014a6ea3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:32.812844 master-0 kubenswrapper[37036]: I0312 14:37:32.812829 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:32.812844 master-0 kubenswrapper[37036]: I0312 14:37:32.812844 37036 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:32.812844 master-0 kubenswrapper[37036]: I0312 14:37:32.812854 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88bd8c29-8f46-4398-b61c-3da3014a6ea3-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:32.822931 master-0 kubenswrapper[37036]: I0312 14:37:32.822857 37036 generic.go:334] "Generic (PLEG): container finished" podID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" containerID="9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87" exitCode=0 Mar 12 14:37:32.823134 master-0 kubenswrapper[37036]: I0312 14:37:32.822952 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" 
event={"ID":"88bd8c29-8f46-4398-b61c-3da3014a6ea3","Type":"ContainerDied","Data":"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87"} Mar 12 14:37:32.823134 master-0 kubenswrapper[37036]: I0312 14:37:32.822981 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" event={"ID":"88bd8c29-8f46-4398-b61c-3da3014a6ea3","Type":"ContainerDied","Data":"bdb8108a57a4db70c026f2e24af6135b790509958b9634ccadfe20118c0b4f40"} Mar 12 14:37:32.823134 master-0 kubenswrapper[37036]: I0312 14:37:32.822979 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7459fbbf55-6pddj" Mar 12 14:37:32.823134 master-0 kubenswrapper[37036]: I0312 14:37:32.823008 37036 scope.go:117] "RemoveContainer" containerID="9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87" Mar 12 14:37:32.825434 master-0 kubenswrapper[37036]: I0312 14:37:32.824944 37036 generic.go:334] "Generic (PLEG): container finished" podID="467464b8-1685-43ac-8934-cbb0bccf0143" containerID="8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6" exitCode=0 Mar 12 14:37:32.825434 master-0 kubenswrapper[37036]: I0312 14:37:32.824989 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" event={"ID":"467464b8-1685-43ac-8934-cbb0bccf0143","Type":"ContainerDied","Data":"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6"} Mar 12 14:37:32.825434 master-0 kubenswrapper[37036]: I0312 14:37:32.825018 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" event={"ID":"467464b8-1685-43ac-8934-cbb0bccf0143","Type":"ContainerDied","Data":"a35fcc6ad8d9e62e4ec92a7e0e9dc08efcb1048e9e6769bcee30b95dfa1a9b0c"} Mar 12 14:37:32.825434 master-0 kubenswrapper[37036]: I0312 14:37:32.825068 
37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt" Mar 12 14:37:32.860207 master-0 kubenswrapper[37036]: I0312 14:37:32.860157 37036 scope.go:117] "RemoveContainer" containerID="9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87" Mar 12 14:37:32.860683 master-0 kubenswrapper[37036]: E0312 14:37:32.860634 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87\": container with ID starting with 9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87 not found: ID does not exist" containerID="9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87" Mar 12 14:37:32.860683 master-0 kubenswrapper[37036]: I0312 14:37:32.860672 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87"} err="failed to get container status \"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87\": rpc error: code = NotFound desc = could not find container \"9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87\": container with ID starting with 9e540bbb15ddc259aedd338668840cd4d475ae2a2d75caaf5220cf5d404f5b87 not found: ID does not exist" Mar 12 14:37:32.860839 master-0 kubenswrapper[37036]: I0312 14:37:32.860692 37036 scope.go:117] "RemoveContainer" containerID="8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6" Mar 12 14:37:32.876137 master-0 kubenswrapper[37036]: I0312 14:37:32.876085 37036 scope.go:117] "RemoveContainer" containerID="8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6" Mar 12 14:37:32.876562 master-0 kubenswrapper[37036]: E0312 14:37:32.876511 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6\": container with ID starting with 8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6 not found: ID does not exist" containerID="8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6" Mar 12 14:37:32.876780 master-0 kubenswrapper[37036]: I0312 14:37:32.876560 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6"} err="failed to get container status \"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6\": rpc error: code = NotFound desc = could not find container \"8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6\": container with ID starting with 8b5c92dad243ae4469433d70ff6cb805ca980e8043816d98ea1304697c732bf6 not found: ID does not exist" Mar 12 14:37:33.224228 master-0 kubenswrapper[37036]: I0312 14:37:33.224144 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb"] Mar 12 14:37:33.224562 master-0 kubenswrapper[37036]: E0312 14:37:33.224530 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467464b8-1685-43ac-8934-cbb0bccf0143" containerName="route-controller-manager" Mar 12 14:37:33.224562 master-0 kubenswrapper[37036]: I0312 14:37:33.224557 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="467464b8-1685-43ac-8934-cbb0bccf0143" containerName="route-controller-manager" Mar 12 14:37:33.224679 master-0 kubenswrapper[37036]: E0312 14:37:33.224618 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" containerName="controller-manager" Mar 12 14:37:33.224679 master-0 kubenswrapper[37036]: I0312 14:37:33.224627 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" 
containerName="controller-manager" Mar 12 14:37:33.224866 master-0 kubenswrapper[37036]: I0312 14:37:33.224840 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="467464b8-1685-43ac-8934-cbb0bccf0143" containerName="route-controller-manager" Mar 12 14:37:33.224866 master-0 kubenswrapper[37036]: I0312 14:37:33.224861 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" containerName="controller-manager" Mar 12 14:37:33.225343 master-0 kubenswrapper[37036]: I0312 14:37:33.225317 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.227732 master-0 kubenswrapper[37036]: I0312 14:37:33.227557 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:37:33.228014 master-0 kubenswrapper[37036]: I0312 14:37:33.227855 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:37:33.228217 master-0 kubenswrapper[37036]: I0312 14:37:33.228147 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 14:37:33.228335 master-0 kubenswrapper[37036]: I0312 14:37:33.228307 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82tbw" Mar 12 14:37:33.228532 master-0 kubenswrapper[37036]: I0312 14:37:33.228514 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:37:33.229677 master-0 kubenswrapper[37036]: I0312 14:37:33.229604 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:37:33.232226 master-0 kubenswrapper[37036]: 
I0312 14:37:33.232178 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"] Mar 12 14:37:33.254166 master-0 kubenswrapper[37036]: I0312 14:37:33.254107 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb"] Mar 12 14:37:33.254166 master-0 kubenswrapper[37036]: I0312 14:37:33.254159 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5594548-b46pt"] Mar 12 14:37:33.297827 master-0 kubenswrapper[37036]: I0312 14:37:33.297007 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"] Mar 12 14:37:33.299525 master-0 kubenswrapper[37036]: I0312 14:37:33.299453 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7459fbbf55-6pddj"] Mar 12 14:37:33.322270 master-0 kubenswrapper[37036]: I0312 14:37:33.322196 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a644be-66df-421a-a0cc-86d508d09cff-serving-cert\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.322553 master-0 kubenswrapper[37036]: I0312 14:37:33.322403 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrhr\" (UniqueName: \"kubernetes.io/projected/78a644be-66df-421a-a0cc-86d508d09cff-kube-api-access-khrhr\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.322692 master-0 
kubenswrapper[37036]: I0312 14:37:33.322651 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-config\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.322845 master-0 kubenswrapper[37036]: I0312 14:37:33.322813 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-client-ca\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.404134 master-0 kubenswrapper[37036]: E0312 14:37:33.404017 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]" Mar 12 14:37:33.425658 master-0 kubenswrapper[37036]: I0312 14:37:33.425556 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a644be-66df-421a-a0cc-86d508d09cff-serving-cert\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.425658 master-0 kubenswrapper[37036]: I0312 14:37:33.425661 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrhr\" (UniqueName: \"kubernetes.io/projected/78a644be-66df-421a-a0cc-86d508d09cff-kube-api-access-khrhr\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.425853 master-0 kubenswrapper[37036]: I0312 14:37:33.425701 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-config\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.425853 master-0 kubenswrapper[37036]: I0312 14:37:33.425755 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-client-ca\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.427317 master-0 kubenswrapper[37036]: I0312 14:37:33.427289 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-config\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.427452 master-0 kubenswrapper[37036]: I0312 14:37:33.427411 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78a644be-66df-421a-a0cc-86d508d09cff-client-ca\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: 
\"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.429397 master-0 kubenswrapper[37036]: I0312 14:37:33.429356 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78a644be-66df-421a-a0cc-86d508d09cff-serving-cert\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.441588 master-0 kubenswrapper[37036]: I0312 14:37:33.441546 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrhr\" (UniqueName: \"kubernetes.io/projected/78a644be-66df-421a-a0cc-86d508d09cff-kube-api-access-khrhr\") pod \"route-controller-manager-6b48555cb-jblpb\" (UID: \"78a644be-66df-421a-a0cc-86d508d09cff\") " pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:33.561887 master-0 kubenswrapper[37036]: I0312 14:37:33.561805 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:34.040254 master-0 kubenswrapper[37036]: I0312 14:37:34.040220 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb"] Mar 12 14:37:34.046095 master-0 kubenswrapper[37036]: W0312 14:37:34.046051 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a644be_66df_421a_a0cc_86d508d09cff.slice/crio-12c1207b008d605c3b69b92d4de0187965323f4690aa83885a313aa8e8d89545 WatchSource:0}: Error finding container 12c1207b008d605c3b69b92d4de0187965323f4690aa83885a313aa8e8d89545: Status 404 returned error can't find the container with id 12c1207b008d605c3b69b92d4de0187965323f4690aa83885a313aa8e8d89545 Mar 12 14:37:34.842584 master-0 kubenswrapper[37036]: I0312 14:37:34.841781 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" event={"ID":"78a644be-66df-421a-a0cc-86d508d09cff","Type":"ContainerStarted","Data":"bcedab60cf625de6196028f68bf642e3f47b8376d70e445efff7aee9ec58bc31"} Mar 12 14:37:34.842584 master-0 kubenswrapper[37036]: I0312 14:37:34.841861 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" event={"ID":"78a644be-66df-421a-a0cc-86d508d09cff","Type":"ContainerStarted","Data":"12c1207b008d605c3b69b92d4de0187965323f4690aa83885a313aa8e8d89545"} Mar 12 14:37:34.844112 master-0 kubenswrapper[37036]: I0312 14:37:34.843315 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:34.848304 master-0 kubenswrapper[37036]: I0312 14:37:34.848254 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" Mar 12 14:37:34.892393 master-0 kubenswrapper[37036]: I0312 14:37:34.892311 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b48555cb-jblpb" podStartSLOduration=3.89229111 podStartE2EDuration="3.89229111s" podCreationTimestamp="2026-03-12 14:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:34.868090421 +0000 UTC m=+113.875831398" watchObservedRunningTime="2026-03-12 14:37:34.89229111 +0000 UTC m=+113.900032067" Mar 12 14:37:35.242252 master-0 kubenswrapper[37036]: I0312 14:37:35.242191 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467464b8-1685-43ac-8934-cbb0bccf0143" path="/var/lib/kubelet/pods/467464b8-1685-43ac-8934-cbb0bccf0143/volumes" Mar 12 14:37:35.243512 master-0 kubenswrapper[37036]: I0312 14:37:35.242719 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88bd8c29-8f46-4398-b61c-3da3014a6ea3" path="/var/lib/kubelet/pods/88bd8c29-8f46-4398-b61c-3da3014a6ea3/volumes" Mar 12 14:37:35.979091 master-0 kubenswrapper[37036]: I0312 14:37:35.979027 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-547c59f7cc-lv94f"] Mar 12 14:37:35.980008 master-0 kubenswrapper[37036]: I0312 14:37:35.979973 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:35.982534 master-0 kubenswrapper[37036]: I0312 14:37:35.982509 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:37:35.982780 master-0 kubenswrapper[37036]: I0312 14:37:35.982766 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:37:35.983058 master-0 kubenswrapper[37036]: I0312 14:37:35.983045 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:37:35.983387 master-0 kubenswrapper[37036]: I0312 14:37:35.983373 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 14:37:35.985317 master-0 kubenswrapper[37036]: I0312 14:37:35.985296 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:37:35.985601 master-0 kubenswrapper[37036]: I0312 14:37:35.985585 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2qg98" Mar 12 14:37:35.994338 master-0 kubenswrapper[37036]: I0312 14:37:35.994267 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:37:35.996539 master-0 kubenswrapper[37036]: I0312 14:37:35.996489 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547c59f7cc-lv94f"] Mar 12 14:37:36.067785 master-0 kubenswrapper[37036]: I0312 14:37:36.067714 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-proxy-ca-bundles\") pod 
\"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.067785 master-0 kubenswrapper[37036]: I0312 14:37:36.067787 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a750e63-2387-4d04-97db-72946bde68a8-serving-cert\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.068094 master-0 kubenswrapper[37036]: I0312 14:37:36.067854 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9tg\" (UniqueName: \"kubernetes.io/projected/0a750e63-2387-4d04-97db-72946bde68a8-kube-api-access-2w9tg\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.068094 master-0 kubenswrapper[37036]: I0312 14:37:36.067923 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-config\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.068094 master-0 kubenswrapper[37036]: I0312 14:37:36.067959 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-client-ca\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.169805 master-0 
kubenswrapper[37036]: I0312 14:37:36.169727 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a750e63-2387-4d04-97db-72946bde68a8-serving-cert\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.170076 master-0 kubenswrapper[37036]: I0312 14:37:36.169968 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w9tg\" (UniqueName: \"kubernetes.io/projected/0a750e63-2387-4d04-97db-72946bde68a8-kube-api-access-2w9tg\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.170076 master-0 kubenswrapper[37036]: I0312 14:37:36.170023 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-config\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.170076 master-0 kubenswrapper[37036]: I0312 14:37:36.170058 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-client-ca\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.170224 master-0 kubenswrapper[37036]: I0312 14:37:36.170118 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-proxy-ca-bundles\") pod 
\"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.171677 master-0 kubenswrapper[37036]: I0312 14:37:36.171642 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-client-ca\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.172250 master-0 kubenswrapper[37036]: I0312 14:37:36.172221 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-config\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.173317 master-0 kubenswrapper[37036]: I0312 14:37:36.173258 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a750e63-2387-4d04-97db-72946bde68a8-proxy-ca-bundles\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.177032 master-0 kubenswrapper[37036]: I0312 14:37:36.176956 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a750e63-2387-4d04-97db-72946bde68a8-serving-cert\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.195002 master-0 kubenswrapper[37036]: I0312 14:37:36.193068 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2w9tg\" (UniqueName: \"kubernetes.io/projected/0a750e63-2387-4d04-97db-72946bde68a8-kube-api-access-2w9tg\") pod \"controller-manager-547c59f7cc-lv94f\" (UID: \"0a750e63-2387-4d04-97db-72946bde68a8\") " pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.354065 master-0 kubenswrapper[37036]: I0312 14:37:36.353763 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:36.741170 master-0 kubenswrapper[37036]: I0312 14:37:36.741116 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547c59f7cc-lv94f"] Mar 12 14:37:36.748330 master-0 kubenswrapper[37036]: W0312 14:37:36.748261 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a750e63_2387_4d04_97db_72946bde68a8.slice/crio-7e2df283351d447655e77d886b8604b5803cf9bc3b23f55c03e3c0b95944d485 WatchSource:0}: Error finding container 7e2df283351d447655e77d886b8604b5803cf9bc3b23f55c03e3c0b95944d485: Status 404 returned error can't find the container with id 7e2df283351d447655e77d886b8604b5803cf9bc3b23f55c03e3c0b95944d485 Mar 12 14:37:36.854657 master-0 kubenswrapper[37036]: I0312 14:37:36.854614 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" event={"ID":"0a750e63-2387-4d04-97db-72946bde68a8","Type":"ContainerStarted","Data":"7e2df283351d447655e77d886b8604b5803cf9bc3b23f55c03e3c0b95944d485"} Mar 12 14:37:37.861683 master-0 kubenswrapper[37036]: I0312 14:37:37.861621 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" event={"ID":"0a750e63-2387-4d04-97db-72946bde68a8","Type":"ContainerStarted","Data":"b08dfd4e5913013c2638153fa8a693f37274fece2af4972c37f3979f6f724206"} Mar 12 
14:37:37.862295 master-0 kubenswrapper[37036]: I0312 14:37:37.862067 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:37.867275 master-0 kubenswrapper[37036]: I0312 14:37:37.867220 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" Mar 12 14:37:37.880546 master-0 kubenswrapper[37036]: I0312 14:37:37.880386 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-547c59f7cc-lv94f" podStartSLOduration=6.880370459 podStartE2EDuration="6.880370459s" podCreationTimestamp="2026-03-12 14:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:37.879360805 +0000 UTC m=+116.887101752" watchObservedRunningTime="2026-03-12 14:37:37.880370459 +0000 UTC m=+116.888111396" Mar 12 14:37:38.933164 master-0 kubenswrapper[37036]: I0312 14:37:38.932476 37036 patch_prober.go:28] interesting pod/console-c847675b7-vfq5t container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 12 14:37:38.933164 master-0 kubenswrapper[37036]: I0312 14:37:38.932542 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 12 14:37:40.472580 master-0 kubenswrapper[37036]: I0312 14:37:40.472503 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 14:37:40.472580 master-0 kubenswrapper[37036]: I0312 14:37:40.472567 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 14:37:42.246239 master-0 kubenswrapper[37036]: I0312 14:37:42.246161 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg"] Mar 12 14:37:42.247757 master-0 kubenswrapper[37036]: I0312 14:37:42.247709 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.253955 master-0 kubenswrapper[37036]: I0312 14:37:42.253341 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 14:37:42.253955 master-0 kubenswrapper[37036]: I0312 14:37:42.253620 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 14:37:42.267780 master-0 kubenswrapper[37036]: I0312 14:37:42.266319 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg"] Mar 12 14:37:42.354888 master-0 kubenswrapper[37036]: I0312 14:37:42.354828 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " 
pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.355096 master-0 kubenswrapper[37036]: I0312 14:37:42.354971 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34c25abb-4afe-4f5f-b259-a194ac6f0013-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.456283 master-0 kubenswrapper[37036]: I0312 14:37:42.456216 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.456515 master-0 kubenswrapper[37036]: E0312 14:37:42.456430 37036 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 12 14:37:42.456576 master-0 kubenswrapper[37036]: E0312 14:37:42.456524 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert podName:34c25abb-4afe-4f5f-b259-a194ac6f0013 nodeName:}" failed. No retries permitted until 2026-03-12 14:37:42.956501734 +0000 UTC m=+121.964242671 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-zmfxg" (UID: "34c25abb-4afe-4f5f-b259-a194ac6f0013") : secret "networking-console-plugin-cert" not found Mar 12 14:37:42.457611 master-0 kubenswrapper[37036]: I0312 14:37:42.456623 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34c25abb-4afe-4f5f-b259-a194ac6f0013-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.457611 master-0 kubenswrapper[37036]: I0312 14:37:42.457569 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34c25abb-4afe-4f5f-b259-a194ac6f0013-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.827736 master-0 kubenswrapper[37036]: E0312 14:37:42.825440 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]" Mar 12 14:37:42.964639 master-0 kubenswrapper[37036]: I0312 14:37:42.964557 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:42.967692 master-0 kubenswrapper[37036]: I0312 14:37:42.967649 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c25abb-4afe-4f5f-b259-a194ac6f0013-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zmfxg\" (UID: \"34c25abb-4afe-4f5f-b259-a194ac6f0013\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:43.176727 master-0 kubenswrapper[37036]: I0312 14:37:43.176589 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" Mar 12 14:37:43.381966 master-0 kubenswrapper[37036]: I0312 14:37:43.380439 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c847675b7-vfq5t"] Mar 12 14:37:43.409794 master-0 kubenswrapper[37036]: I0312 14:37:43.409581 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"] Mar 12 14:37:43.411186 master-0 kubenswrapper[37036]: I0312 14:37:43.410671 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.425619 master-0 kubenswrapper[37036]: I0312 14:37:43.425229 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"] Mar 12 14:37:43.507919 master-0 kubenswrapper[37036]: I0312 14:37:43.507854 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.507919 master-0 kubenswrapper[37036]: I0312 14:37:43.507922 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.508216 master-0 kubenswrapper[37036]: I0312 14:37:43.508040 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.508359 master-0 kubenswrapper[37036]: I0312 14:37:43.508316 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4snz\" (UniqueName: \"kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.508486 
master-0 kubenswrapper[37036]: I0312 14:37:43.508449 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.508585 master-0 kubenswrapper[37036]: I0312 14:37:43.508503 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.508619 master-0 kubenswrapper[37036]: I0312 14:37:43.508584 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.609625 master-0 kubenswrapper[37036]: I0312 14:37:43.609564 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4snz\" (UniqueName: \"kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.609857 master-0 kubenswrapper[37036]: I0312 14:37:43.609794 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca\") pod \"console-59db58b99d-jwn7z\" (UID: 
\"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.609969 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610084 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610139 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610155 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610337 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610647 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.610942 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.611259 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.613410 master-0 kubenswrapper[37036]: I0312 14:37:43.611328 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.620388 master-0 kubenswrapper[37036]: I0312 14:37:43.616073 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.620388 master-0 kubenswrapper[37036]: I0312 14:37:43.616093 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.620388 master-0 kubenswrapper[37036]: I0312 14:37:43.618860 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg"] Mar 12 14:37:43.628496 master-0 kubenswrapper[37036]: I0312 14:37:43.628394 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4snz\" (UniqueName: \"kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz\") pod \"console-59db58b99d-jwn7z\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") " pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.740629 master-0 kubenswrapper[37036]: I0312 14:37:43.740514 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:43.908667 master-0 kubenswrapper[37036]: I0312 14:37:43.908581 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" event={"ID":"34c25abb-4afe-4f5f-b259-a194ac6f0013","Type":"ContainerStarted","Data":"f23f0ea0967d324e7a7a051ddb126983c60c0eea9194be046599ff90569ea1c5"} Mar 12 14:37:44.114974 master-0 kubenswrapper[37036]: I0312 14:37:44.114918 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"] Mar 12 14:37:44.120975 master-0 kubenswrapper[37036]: W0312 14:37:44.120914 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5125edfe_0ec5_4664_ae68_2c98e3187d79.slice/crio-578e2cad66d2e5b22a6881e70299814b8c5512b30af15fd467fc0d05faf8165e WatchSource:0}: Error finding container 578e2cad66d2e5b22a6881e70299814b8c5512b30af15fd467fc0d05faf8165e: Status 404 returned error can't find the container with id 578e2cad66d2e5b22a6881e70299814b8c5512b30af15fd467fc0d05faf8165e Mar 12 14:37:44.915947 master-0 kubenswrapper[37036]: I0312 14:37:44.915850 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59db58b99d-jwn7z" event={"ID":"5125edfe-0ec5-4664-ae68-2c98e3187d79","Type":"ContainerStarted","Data":"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8"} Mar 12 14:37:44.915947 master-0 kubenswrapper[37036]: I0312 14:37:44.915914 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59db58b99d-jwn7z" event={"ID":"5125edfe-0ec5-4664-ae68-2c98e3187d79","Type":"ContainerStarted","Data":"578e2cad66d2e5b22a6881e70299814b8c5512b30af15fd467fc0d05faf8165e"} Mar 12 14:37:44.935409 master-0 kubenswrapper[37036]: I0312 14:37:44.935311 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-59db58b99d-jwn7z" podStartSLOduration=1.9352900480000002 podStartE2EDuration="1.935290048s" podCreationTimestamp="2026-03-12 14:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:37:44.931310139 +0000 UTC m=+123.939051086" watchObservedRunningTime="2026-03-12 14:37:44.935290048 +0000 UTC m=+123.943030985" Mar 12 14:37:45.096518 master-0 kubenswrapper[37036]: I0312 14:37:45.096310 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b77f48c6d-w6489" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" containerID="cri-o://228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb" gracePeriod=15 Mar 12 14:37:45.340221 master-0 kubenswrapper[37036]: I0312 14:37:45.340161 37036 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 14:37:45.341074 master-0 kubenswrapper[37036]: I0312 14:37:45.341046 37036 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 14:37:45.341185 master-0 kubenswrapper[37036]: I0312 14:37:45.341159 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.341306 master-0 kubenswrapper[37036]: I0312 14:37:45.341253 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" containerID="cri-o://bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c" gracePeriod=15 Mar 12 14:37:45.341382 master-0 kubenswrapper[37036]: I0312 14:37:45.341311 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722" gracePeriod=15 Mar 12 14:37:45.341382 master-0 kubenswrapper[37036]: I0312 14:37:45.341322 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9" gracePeriod=15 Mar 12 14:37:45.341382 master-0 kubenswrapper[37036]: I0312 14:37:45.341276 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55" gracePeriod=15 Mar 12 14:37:45.341382 master-0 kubenswrapper[37036]: I0312 14:37:45.341346 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097" gracePeriod=15 Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.341792 37036 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342002 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342014 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342027 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342033 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342044 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342050 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342055 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342061 37036 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342071 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342078 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342093 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342101 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: E0312 14:37:45.342117 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342125 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342269 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342287 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342298 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" 
containerName="kube-apiserver" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342310 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342327 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 14:37:45.343320 master-0 kubenswrapper[37036]: I0312 14:37:45.342628 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 14:37:45.427360 master-0 kubenswrapper[37036]: E0312 14:37:45.427285 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.438704 master-0 kubenswrapper[37036]: I0312 14:37:45.438641 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.438704 master-0 kubenswrapper[37036]: I0312 14:37:45.438708 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 
14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.438830 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.438857 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.438884 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.438972 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.439001 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.439044 master-0 kubenswrapper[37036]: I0312 14:37:45.439029 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.540429 master-0 kubenswrapper[37036]: I0312 14:37:45.540332 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.540429 master-0 kubenswrapper[37036]: I0312 14:37:45.540410 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540491 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540522 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540571 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540600 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540623 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540663 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540765 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540823 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.540840 master-0 kubenswrapper[37036]: I0312 14:37:45.540845 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.541471 master-0 kubenswrapper[37036]: I0312 14:37:45.540881 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.541471 master-0 kubenswrapper[37036]: I0312 14:37:45.540927 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.541471 master-0 kubenswrapper[37036]: I0312 14:37:45.540951 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.541471 master-0 kubenswrapper[37036]: I0312 14:37:45.540970 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:45.541471 master-0 kubenswrapper[37036]: I0312 14:37:45.541022 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.661787 master-0 kubenswrapper[37036]: I0312 14:37:45.658694 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b77f48c6d-w6489_1dd55143-3e81-4eb5-9f83-b4c13614dd69/console/0.log" Mar 12 14:37:45.661787 master-0 kubenswrapper[37036]: I0312 14:37:45.658760 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:37:45.662348 master-0 kubenswrapper[37036]: I0312 14:37:45.662290 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.663016 master-0 kubenswrapper[37036]: I0312 14:37:45.662977 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.727863 master-0 kubenswrapper[37036]: I0312 14:37:45.727794 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:45.743138 master-0 kubenswrapper[37036]: I0312 14:37:45.743084 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743209 master-0 kubenswrapper[37036]: I0312 14:37:45.743172 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743452 master-0 kubenswrapper[37036]: I0312 14:37:45.743409 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743545 master-0 kubenswrapper[37036]: I0312 14:37:45.743515 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfvdn\" (UniqueName: \"kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743624 master-0 kubenswrapper[37036]: I0312 14:37:45.743604 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743819 master-0 
kubenswrapper[37036]: I0312 14:37:45.743682 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert\") pod \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\" (UID: \"1dd55143-3e81-4eb5-9f83-b4c13614dd69\") " Mar 12 14:37:45.743819 master-0 kubenswrapper[37036]: I0312 14:37:45.743763 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config" (OuterVolumeSpecName: "console-config") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:45.744276 master-0 kubenswrapper[37036]: I0312 14:37:45.744238 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:45.744276 master-0 kubenswrapper[37036]: I0312 14:37:45.744238 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca" (OuterVolumeSpecName: "service-ca") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:37:45.744396 master-0 kubenswrapper[37036]: I0312 14:37:45.744277 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.746100 master-0 kubenswrapper[37036]: I0312 14:37:45.746060 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:45.746490 master-0 kubenswrapper[37036]: I0312 14:37:45.746417 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn" (OuterVolumeSpecName: "kube-api-access-qfvdn") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "kube-api-access-qfvdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:45.747607 master-0 kubenswrapper[37036]: W0312 14:37:45.747573 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda814bd60de133d95cf99630a978c017e.slice/crio-d039067c7cd1410737e5bc53552b85b5841831badf7248114b80fa11180a9d6c WatchSource:0}: Error finding container d039067c7cd1410737e5bc53552b85b5841831badf7248114b80fa11180a9d6c: Status 404 returned error can't find the container with id d039067c7cd1410737e5bc53552b85b5841831badf7248114b80fa11180a9d6c Mar 12 14:37:45.748126 master-0 kubenswrapper[37036]: I0312 14:37:45.748078 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1dd55143-3e81-4eb5-9f83-b4c13614dd69" (UID: "1dd55143-3e81-4eb5-9f83-b4c13614dd69"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:37:45.750804 master-0 kubenswrapper[37036]: E0312 14:37:45.750673 37036 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c1ed7dce4255a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:37:45.74987401 +0000 UTC m=+124.757614947,LastTimestamp:2026-03-12 14:37:45.74987401 +0000 UTC m=+124.757614947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:37:45.846234 master-0 kubenswrapper[37036]: I0312 14:37:45.846155 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.846234 master-0 kubenswrapper[37036]: I0312 14:37:45.846214 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1dd55143-3e81-4eb5-9f83-b4c13614dd69-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.846485 master-0 kubenswrapper[37036]: I0312 14:37:45.846261 37036 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.846485 master-0 kubenswrapper[37036]: I0312 14:37:45.846275 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1dd55143-3e81-4eb5-9f83-b4c13614dd69-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.846485 master-0 kubenswrapper[37036]: I0312 14:37:45.846291 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfvdn\" (UniqueName: \"kubernetes.io/projected/1dd55143-3e81-4eb5-9f83-b4c13614dd69-kube-api-access-qfvdn\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:45.924644 master-0 kubenswrapper[37036]: I0312 14:37:45.924591 37036 generic.go:334] "Generic (PLEG): container finished" podID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" containerID="852a885b7715f01e617c3f371756c175c3c437f2c97c3223d69b9af5d6424ea5" exitCode=0 Mar 12 14:37:45.925096 master-0 kubenswrapper[37036]: I0312 14:37:45.924671 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2","Type":"ContainerDied","Data":"852a885b7715f01e617c3f371756c175c3c437f2c97c3223d69b9af5d6424ea5"} Mar 12 14:37:45.927814 master-0 kubenswrapper[37036]: I0312 14:37:45.927770 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-check-endpoints/0.log" Mar 12 14:37:45.928549 master-0 kubenswrapper[37036]: I0312 14:37:45.928509 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Mar 12 14:37:45.929485 master-0 kubenswrapper[37036]: I0312 14:37:45.929442 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 14:37:45.929677 master-0 kubenswrapper[37036]: I0312 14:37:45.929611 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.930688 master-0 kubenswrapper[37036]: I0312 14:37:45.930427 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55" exitCode=0 Mar 12 14:37:45.930688 master-0 kubenswrapper[37036]: I0312 14:37:45.930653 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722" exitCode=0 Mar 12 14:37:45.930688 master-0 kubenswrapper[37036]: I0312 14:37:45.930661 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9" exitCode=0 Mar 12 14:37:45.930688 master-0 kubenswrapper[37036]: I0312 14:37:45.930668 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097" exitCode=2 Mar 12 14:37:45.930688 master-0 kubenswrapper[37036]: I0312 14:37:45.930449 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.931353 master-0 kubenswrapper[37036]: I0312 14:37:45.930717 37036 scope.go:117] "RemoveContainer" containerID="38d6f94bd36743b5e1de43d22e67db88c9c5b063935ce36f553f6e277d2085b0" Mar 12 14:37:45.932776 master-0 kubenswrapper[37036]: I0312 14:37:45.932741 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b77f48c6d-w6489_1dd55143-3e81-4eb5-9f83-b4c13614dd69/console/0.log" Mar 12 14:37:45.932850 master-0 kubenswrapper[37036]: I0312 14:37:45.932820 37036 generic.go:334] "Generic (PLEG): container finished" podID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerID="228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb" exitCode=2 Mar 12 14:37:45.932891 master-0 kubenswrapper[37036]: I0312 14:37:45.932874 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b77f48c6d-w6489" Mar 12 14:37:45.932980 master-0 kubenswrapper[37036]: I0312 14:37:45.932934 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b77f48c6d-w6489" event={"ID":"1dd55143-3e81-4eb5-9f83-b4c13614dd69","Type":"ContainerDied","Data":"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb"} Mar 12 14:37:45.933024 master-0 kubenswrapper[37036]: I0312 14:37:45.932991 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b77f48c6d-w6489" event={"ID":"1dd55143-3e81-4eb5-9f83-b4c13614dd69","Type":"ContainerDied","Data":"acf7593e35971481ff36e3b3d9c788080b6be43257a0f65baadbb95e0371defe"} Mar 12 14:37:45.933932 master-0 kubenswrapper[37036]: I0312 14:37:45.933874 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.934466 master-0 kubenswrapper[37036]: I0312 14:37:45.934428 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.935034 master-0 kubenswrapper[37036]: I0312 14:37:45.934997 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.940239 
master-0 kubenswrapper[37036]: I0312 14:37:45.940205 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"d039067c7cd1410737e5bc53552b85b5841831badf7248114b80fa11180a9d6c"} Mar 12 14:37:45.949878 master-0 kubenswrapper[37036]: I0312 14:37:45.949814 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.950374 master-0 kubenswrapper[37036]: I0312 14:37:45.950331 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.950823 master-0 kubenswrapper[37036]: I0312 14:37:45.950784 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:45.953340 master-0 kubenswrapper[37036]: I0312 14:37:45.953310 37036 scope.go:117] "RemoveContainer" containerID="228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb" Mar 12 14:37:45.971822 master-0 kubenswrapper[37036]: I0312 14:37:45.971761 37036 scope.go:117] "RemoveContainer" containerID="228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb" Mar 12 14:37:45.972568 master-0 
kubenswrapper[37036]: E0312 14:37:45.972489 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb\": container with ID starting with 228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb not found: ID does not exist" containerID="228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb" Mar 12 14:37:45.973323 master-0 kubenswrapper[37036]: I0312 14:37:45.972582 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb"} err="failed to get container status \"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb\": rpc error: code = NotFound desc = could not find container \"228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb\": container with ID starting with 228bb983396bf00758302746e1baf37b799848dbac21045f7d8e5330914695fb not found: ID does not exist" Mar 12 14:37:46.949268 master-0 kubenswrapper[37036]: I0312 14:37:46.949218 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 14:37:46.951494 master-0 kubenswrapper[37036]: I0312 14:37:46.951310 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" event={"ID":"34c25abb-4afe-4f5f-b259-a194ac6f0013","Type":"ContainerStarted","Data":"e7c3e8a5aa7d74afbcc072b47643962d03647ea2c76c5e75e5d3f28c83d7d90e"} Mar 12 14:37:46.952669 master-0 kubenswrapper[37036]: I0312 14:37:46.952606 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.953617 master-0 kubenswrapper[37036]: I0312 14:37:46.953310 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.954297 master-0 kubenswrapper[37036]: I0312 14:37:46.954219 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.955222 master-0 kubenswrapper[37036]: I0312 14:37:46.955160 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.957361 master-0 kubenswrapper[37036]: I0312 14:37:46.957308 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0"} Mar 12 14:37:46.959085 master-0 kubenswrapper[37036]: I0312 14:37:46.959037 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" 
pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.959139 master-0 kubenswrapper[37036]: E0312 14:37:46.959070 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:46.959647 master-0 kubenswrapper[37036]: I0312 14:37:46.959606 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.960151 master-0 kubenswrapper[37036]: I0312 14:37:46.960112 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:46.960736 master-0 kubenswrapper[37036]: I0312 14:37:46.960693 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.507046 master-0 kubenswrapper[37036]: I0312 14:37:47.507002 37036 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:47.511546 master-0 kubenswrapper[37036]: I0312 14:37:47.511480 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.513971 master-0 kubenswrapper[37036]: I0312 14:37:47.513813 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.515416 master-0 kubenswrapper[37036]: I0312 14:37:47.515351 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.601161 master-0 kubenswrapper[37036]: I0312 14:37:47.601108 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock\") pod \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " Mar 12 14:37:47.601368 master-0 kubenswrapper[37036]: I0312 14:37:47.601197 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access\") pod \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " Mar 12 14:37:47.601368 master-0 kubenswrapper[37036]: I0312 14:37:47.601288 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock" (OuterVolumeSpecName: "var-lock") pod "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" (UID: "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:47.601441 master-0 kubenswrapper[37036]: I0312 14:37:47.601399 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir\") pod \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\" (UID: \"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2\") " Mar 12 14:37:47.601441 master-0 kubenswrapper[37036]: I0312 14:37:47.601426 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" (UID: "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:47.601756 master-0 kubenswrapper[37036]: I0312 14:37:47.601731 37036 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.601824 master-0 kubenswrapper[37036]: I0312 14:37:47.601760 37036 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.603771 master-0 kubenswrapper[37036]: I0312 14:37:47.603720 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" (UID: "e3b3151f-a9b1-43e7-9aec-96d4ff896bf2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:37:47.703065 master-0 kubenswrapper[37036]: I0312 14:37:47.703014 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3b3151f-a9b1-43e7-9aec-96d4ff896bf2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.741695 master-0 kubenswrapper[37036]: I0312 14:37:47.741644 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 14:37:47.742763 master-0 kubenswrapper[37036]: I0312 14:37:47.742736 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:47.743911 master-0 kubenswrapper[37036]: I0312 14:37:47.743842 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.744985 master-0 kubenswrapper[37036]: I0312 14:37:47.744929 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.745707 master-0 kubenswrapper[37036]: I0312 14:37:47.745657 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.746526 master-0 kubenswrapper[37036]: I0312 14:37:47.746474 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:47.804176 master-0 kubenswrapper[37036]: I0312 14:37:47.804132 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 14:37:47.804490 master-0 kubenswrapper[37036]: I0312 14:37:47.804188 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 14:37:47.804490 master-0 kubenswrapper[37036]: I0312 14:37:47.804275 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 14:37:47.804490 master-0 kubenswrapper[37036]: I0312 14:37:47.804362 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:47.804490 master-0 kubenswrapper[37036]: I0312 14:37:47.804440 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:47.804490 master-0 kubenswrapper[37036]: I0312 14:37:47.804457 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:37:47.804989 master-0 kubenswrapper[37036]: I0312 14:37:47.804959 37036 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.804989 master-0 kubenswrapper[37036]: I0312 14:37:47.804984 37036 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.805071 master-0 kubenswrapper[37036]: I0312 14:37:47.804996 37036 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:37:47.971047 master-0 kubenswrapper[37036]: I0312 14:37:47.970617 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 14:37:47.971047 master-0 kubenswrapper[37036]: I0312 14:37:47.970654 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"e3b3151f-a9b1-43e7-9aec-96d4ff896bf2","Type":"ContainerDied","Data":"58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11"} Mar 12 14:37:47.971047 master-0 kubenswrapper[37036]: I0312 14:37:47.970707 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d256f4853a92fa41f8baf1824657f6a653651e0d010b3ecaaf171e7a124d11" Mar 12 14:37:47.975255 master-0 kubenswrapper[37036]: I0312 14:37:47.975210 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 14:37:47.976150 master-0 kubenswrapper[37036]: I0312 14:37:47.976112 37036 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c" exitCode=0 Mar 12 14:37:47.976291 master-0 kubenswrapper[37036]: I0312 14:37:47.976248 37036 scope.go:117] "RemoveContainer" containerID="b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55" Mar 12 14:37:47.976651 master-0 kubenswrapper[37036]: I0312 14:37:47.976567 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:37:47.977715 master-0 kubenswrapper[37036]: E0312 14:37:47.977645 37036 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:37:47.991024 master-0 kubenswrapper[37036]: I0312 14:37:47.990988 37036 scope.go:117] "RemoveContainer" containerID="3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722" Mar 12 14:37:48.004114 master-0 kubenswrapper[37036]: I0312 14:37:48.004038 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.004869 master-0 kubenswrapper[37036]: I0312 14:37:48.004818 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.005578 master-0 kubenswrapper[37036]: I0312 14:37:48.005514 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.006187 master-0 kubenswrapper[37036]: I0312 14:37:48.006135 
37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.006803 master-0 kubenswrapper[37036]: I0312 14:37:48.006753 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.007536 master-0 kubenswrapper[37036]: I0312 14:37:48.007479 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.008159 master-0 kubenswrapper[37036]: I0312 14:37:48.008095 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.008955 master-0 kubenswrapper[37036]: I0312 14:37:48.008888 37036 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 12 14:37:48.015570 master-0 kubenswrapper[37036]: I0312 14:37:48.015526 37036 scope.go:117] "RemoveContainer" containerID="a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9" Mar 12 14:37:48.040970 master-0 kubenswrapper[37036]: I0312 14:37:48.040913 37036 scope.go:117] "RemoveContainer" containerID="ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097" Mar 12 14:37:48.058282 master-0 kubenswrapper[37036]: I0312 14:37:48.058240 37036 scope.go:117] "RemoveContainer" containerID="bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c" Mar 12 14:37:48.079511 master-0 kubenswrapper[37036]: I0312 14:37:48.079455 37036 scope.go:117] "RemoveContainer" containerID="680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447" Mar 12 14:37:48.099764 master-0 kubenswrapper[37036]: I0312 14:37:48.099719 37036 scope.go:117] "RemoveContainer" containerID="b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55" Mar 12 14:37:48.100180 master-0 kubenswrapper[37036]: E0312 14:37:48.100124 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55\": container with ID starting with b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55 not found: ID does not exist" containerID="b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55" Mar 12 14:37:48.100257 master-0 kubenswrapper[37036]: I0312 14:37:48.100177 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55"} err="failed to get container status \"b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55\": rpc error: code = NotFound desc = could not find container \"b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55\": container with ID starting 
with b751bdf0e39401a4d13a469f6d8fde858fcfb6b8b01934e3aae4c85b3c34ac55 not found: ID does not exist" Mar 12 14:37:48.100257 master-0 kubenswrapper[37036]: I0312 14:37:48.100215 37036 scope.go:117] "RemoveContainer" containerID="3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722" Mar 12 14:37:48.100513 master-0 kubenswrapper[37036]: E0312 14:37:48.100468 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722\": container with ID starting with 3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722 not found: ID does not exist" containerID="3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722" Mar 12 14:37:48.100576 master-0 kubenswrapper[37036]: I0312 14:37:48.100523 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722"} err="failed to get container status \"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722\": rpc error: code = NotFound desc = could not find container \"3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722\": container with ID starting with 3cc6add3b8ddeafffa30f8317b74f57c52371e22c6de0912648ca83e47756722 not found: ID does not exist" Mar 12 14:37:48.100576 master-0 kubenswrapper[37036]: I0312 14:37:48.100554 37036 scope.go:117] "RemoveContainer" containerID="a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9" Mar 12 14:37:48.101025 master-0 kubenswrapper[37036]: E0312 14:37:48.100994 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9\": container with ID starting with a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9 not found: ID does not exist" 
containerID="a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9" Mar 12 14:37:48.101025 master-0 kubenswrapper[37036]: I0312 14:37:48.101016 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9"} err="failed to get container status \"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9\": rpc error: code = NotFound desc = could not find container \"a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9\": container with ID starting with a9d7b0be96b2dd2ee16b0e4d8085acc0eb870f88bd3a21243f9c99d9574c51c9 not found: ID does not exist" Mar 12 14:37:48.101025 master-0 kubenswrapper[37036]: I0312 14:37:48.101029 37036 scope.go:117] "RemoveContainer" containerID="ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097" Mar 12 14:37:48.101654 master-0 kubenswrapper[37036]: E0312 14:37:48.101618 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097\": container with ID starting with ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097 not found: ID does not exist" containerID="ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097" Mar 12 14:37:48.101775 master-0 kubenswrapper[37036]: I0312 14:37:48.101656 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097"} err="failed to get container status \"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097\": rpc error: code = NotFound desc = could not find container \"ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097\": container with ID starting with ae60fe54b5ccd230d5c299ecbcb6f31dfb5d0828ec56237e3d4b1ef25899a097 not found: ID does not exist" Mar 12 14:37:48.101775 master-0 
kubenswrapper[37036]: I0312 14:37:48.101685 37036 scope.go:117] "RemoveContainer" containerID="bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c" Mar 12 14:37:48.102056 master-0 kubenswrapper[37036]: E0312 14:37:48.102010 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c\": container with ID starting with bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c not found: ID does not exist" containerID="bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c" Mar 12 14:37:48.102056 master-0 kubenswrapper[37036]: I0312 14:37:48.102036 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c"} err="failed to get container status \"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c\": rpc error: code = NotFound desc = could not find container \"bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c\": container with ID starting with bd47b92106de563d3373945a17b8e6aaefdc2d9f737608fa199cd4000e84df8c not found: ID does not exist" Mar 12 14:37:48.102056 master-0 kubenswrapper[37036]: I0312 14:37:48.102049 37036 scope.go:117] "RemoveContainer" containerID="680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447" Mar 12 14:37:48.102341 master-0 kubenswrapper[37036]: E0312 14:37:48.102301 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447\": container with ID starting with 680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447 not found: ID does not exist" containerID="680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447" Mar 12 14:37:48.102341 master-0 kubenswrapper[37036]: I0312 14:37:48.102329 
37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447"} err="failed to get container status \"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447\": rpc error: code = NotFound desc = could not find container \"680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447\": container with ID starting with 680cd62a7f090bc2a4f20cc8a440912f04f5a4fb884d39ec76cd168ddf53e447 not found: ID does not exist" Mar 12 14:37:48.534687 master-0 kubenswrapper[37036]: E0312 14:37:48.534588 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]" Mar 12 14:37:49.241232 master-0 kubenswrapper[37036]: I0312 14:37:49.241175 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48512e02022680c9d90092634f0fc146" path="/var/lib/kubelet/pods/48512e02022680c9d90092634f0fc146/volumes" Mar 12 14:37:50.472880 master-0 kubenswrapper[37036]: I0312 14:37:50.472783 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 12 14:37:50.473423 master-0 kubenswrapper[37036]: I0312 14:37:50.472963 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 14:37:51.241305 master-0 kubenswrapper[37036]: I0312 14:37:51.241233 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:51.241845 master-0 kubenswrapper[37036]: I0312 14:37:51.241795 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:51.242357 master-0 kubenswrapper[37036]: I0312 14:37:51.242313 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:52.606850 master-0 kubenswrapper[37036]: E0312 14:37:52.606692 37036 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c1ed7dce4255a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 14:37:45.74987401 +0000 UTC m=+124.757614947,LastTimestamp:2026-03-12 14:37:45.74987401 +0000 UTC m=+124.757614947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 14:37:52.867807 master-0 kubenswrapper[37036]: E0312 14:37:52.867559 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]" Mar 12 14:37:53.741084 master-0 kubenswrapper[37036]: I0312 14:37:53.740960 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:53.741084 master-0 kubenswrapper[37036]: I0312 14:37:53.741077 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:37:53.743259 master-0 kubenswrapper[37036]: I0312 14:37:53.743184 37036 patch_prober.go:28] interesting pod/console-59db58b99d-jwn7z container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body= Mar 12 14:37:53.743401 master-0 kubenswrapper[37036]: I0312 14:37:53.743269 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-59db58b99d-jwn7z" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" Mar 12 14:37:55.648492 master-0 kubenswrapper[37036]: E0312 14:37:55.648368 37036 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:55.649609 master-0 kubenswrapper[37036]: E0312 14:37:55.649461 37036 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:55.650761 master-0 kubenswrapper[37036]: E0312 14:37:55.650340 37036 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:55.651494 master-0 kubenswrapper[37036]: E0312 14:37:55.651388 37036 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 14:37:55.652977 master-0 kubenswrapper[37036]: E0312 14:37:55.652870 37036 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:55.652977 master-0 kubenswrapper[37036]: I0312 14:37:55.652958 37036 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 12 14:37:55.653933 master-0 kubenswrapper[37036]: E0312 14:37:55.653808 37036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 12 14:37:55.855304 master-0 kubenswrapper[37036]: E0312 14:37:55.855207 37036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 12 14:37:56.256918 master-0 kubenswrapper[37036]: E0312 14:37:56.256815 37036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 12 14:37:57.057722 master-0 kubenswrapper[37036]: E0312 14:37:57.057644 37036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 12 14:37:58.233831 master-0 kubenswrapper[37036]: I0312 14:37:58.233723 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:37:58.235588 master-0 kubenswrapper[37036]: I0312 14:37:58.235490 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:58.236622 master-0 kubenswrapper[37036]: I0312 14:37:58.236538 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:58.237561 master-0 kubenswrapper[37036]: I0312 14:37:58.237486 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:58.260098 master-0 kubenswrapper[37036]: I0312 14:37:58.260025 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:37:58.260098 master-0 kubenswrapper[37036]: I0312 14:37:58.260080 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:37:58.261741 master-0 kubenswrapper[37036]: E0312 14:37:58.261618 37036 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:37:58.263448 master-0 kubenswrapper[37036]: I0312 14:37:58.263378 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:37:58.302982 master-0 kubenswrapper[37036]: W0312 14:37:58.302859 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d4251d3504cdc0ec85144c1379056c.slice/crio-e4dfd751fca076f483fc120f611f1ea2f4bff8163201a09dc0cf4aab3d1b9d38 WatchSource:0}: Error finding container e4dfd751fca076f483fc120f611f1ea2f4bff8163201a09dc0cf4aab3d1b9d38: Status 404 returned error can't find the container with id e4dfd751fca076f483fc120f611f1ea2f4bff8163201a09dc0cf4aab3d1b9d38
Mar 12 14:37:58.659066 master-0 kubenswrapper[37036]: E0312 14:37:58.658987 37036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 12 14:37:59.063643 master-0 kubenswrapper[37036]: I0312 14:37:59.063580 37036 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="411826a4415b6f149147d06ca2fb8d657145ea53c34aea0baf5c571ac52364e8" exitCode=0
Mar 12 14:37:59.063969 master-0 kubenswrapper[37036]: I0312 14:37:59.063661 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerDied","Data":"411826a4415b6f149147d06ca2fb8d657145ea53c34aea0baf5c571ac52364e8"}
Mar 12 14:37:59.063969 master-0 kubenswrapper[37036]: I0312 14:37:59.063782 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"e4dfd751fca076f483fc120f611f1ea2f4bff8163201a09dc0cf4aab3d1b9d38"}
Mar 12 14:37:59.064463 master-0 kubenswrapper[37036]: I0312 14:37:59.064394 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:37:59.064463 master-0 kubenswrapper[37036]: I0312 14:37:59.064451 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:37:59.065339 master-0 kubenswrapper[37036]: E0312 14:37:59.065258 37036 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:37:59.065458 master-0 kubenswrapper[37036]: I0312 14:37:59.065327 37036 status_manager.go:851] "Failed to get status for pod" podUID="34c25abb-4afe-4f5f-b259-a194ac6f0013" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-5cbd49d755-zmfxg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:59.066546 master-0 kubenswrapper[37036]: I0312 14:37:59.066419 37036 status_manager.go:851] "Failed to get status for pod" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" pod="openshift-console/console-6b77f48c6d-w6489" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-6b77f48c6d-w6489\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:37:59.068035 master-0 kubenswrapper[37036]: I0312 14:37:59.067969 37036 status_manager.go:851] "Failed to get status for pod" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 14:38:00.083246 master-0 kubenswrapper[37036]: I0312 14:38:00.083136 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"8b8bae8379f5ba5560548867cf7208a17a357b3042f8554d1dec25f34d331ae5"}
Mar 12 14:38:00.083246 master-0 kubenswrapper[37036]: I0312 14:38:00.083221 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"e343fde5033b8e0c20b71291301e9c89357dd0d6e28ff369878f508c27d47d2c"}
Mar 12 14:38:00.083246 master-0 kubenswrapper[37036]: I0312 14:38:00.083255 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"933935edae34e816a7599cafdc833d53da4b2f9c2d2d32ebc4e6c4b2b73720bc"}
Mar 12 14:38:00.473030 master-0 kubenswrapper[37036]: I0312 14:38:00.472919 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 14:38:00.473030 master-0 kubenswrapper[37036]: I0312 14:38:00.472988 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 14:38:01.091478 master-0 kubenswrapper[37036]: I0312 14:38:01.091424 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager/0.log"
Mar 12 14:38:01.091478 master-0 kubenswrapper[37036]: I0312 14:38:01.091476 37036 generic.go:334] "Generic (PLEG): container finished" podID="965d6e0e3f611771f8ba2352415f565a" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" exitCode=1
Mar 12 14:38:01.092009 master-0 kubenswrapper[37036]: I0312 14:38:01.091527 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerDied","Data":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"}
Mar 12 14:38:01.092009 master-0 kubenswrapper[37036]: I0312 14:38:01.091965 37036 scope.go:117] "RemoveContainer" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"
Mar 12 14:38:01.095114 master-0 kubenswrapper[37036]: I0312 14:38:01.095059 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"92e5d2a59ab754e61cded93f6b15ff8a6cf8f31183b240fb71b0e25070abf7d0"}
Mar 12 14:38:01.095176 master-0 kubenswrapper[37036]: I0312 14:38:01.095122 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"c4df6329e9a5a5aa232d86ba6fac6207da35896395fe5ab7d0be4000a0143276"}
Mar 12 14:38:01.095228 master-0 kubenswrapper[37036]: I0312 14:38:01.095202 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:01.095309 master-0 kubenswrapper[37036]: I0312 14:38:01.095287 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:01.095309 master-0 kubenswrapper[37036]: I0312 14:38:01.095306 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:02.105341 master-0 kubenswrapper[37036]: I0312 14:38:02.105296 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager/0.log"
Mar 12 14:38:02.105845 master-0 kubenswrapper[37036]: I0312 14:38:02.105358 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"965d6e0e3f611771f8ba2352415f565a","Type":"ContainerStarted","Data":"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa"}
Mar 12 14:38:02.886033 master-0 kubenswrapper[37036]: I0312 14:38:02.885952 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:38:02.886242 master-0 kubenswrapper[37036]: I0312 14:38:02.886058 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:38:02.890889 master-0 kubenswrapper[37036]: I0312 14:38:02.890850 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:38:03.056992 master-0 kubenswrapper[37036]: E0312 14:38:03.055722 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache]"
Mar 12 14:38:03.264735 master-0 kubenswrapper[37036]: I0312 14:38:03.264267 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:03.264735 master-0 kubenswrapper[37036]: I0312 14:38:03.264573 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:03.273773 master-0 kubenswrapper[37036]: I0312 14:38:03.273667 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:03.398420 master-0 kubenswrapper[37036]: E0312 14:38:03.398364 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice/crio-23f2052c5fbcc22cd38a431c5e2ac5863a96ce6b483b26a6af986d36abbcbca8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff9beb6_f6cc_4fd0_9d22_aaf1221c8b34.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 14:38:03.741966 master-0 kubenswrapper[37036]: I0312 14:38:03.741891 37036 patch_prober.go:28] interesting pod/console-59db58b99d-jwn7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Mar 12 14:38:03.742192 master-0 kubenswrapper[37036]: I0312 14:38:03.741977 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-59db58b99d-jwn7z" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Mar 12 14:38:06.187546 master-0 kubenswrapper[37036]: I0312 14:38:06.187466 37036 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:06.370424 master-0 kubenswrapper[37036]: I0312 14:38:06.370324 37036 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="5e6be5e2-926b-44e5-988d-5543084bc077"
Mar 12 14:38:07.146062 master-0 kubenswrapper[37036]: I0312 14:38:07.146008 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:07.146062 master-0 kubenswrapper[37036]: I0312 14:38:07.146058 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:07.149471 master-0 kubenswrapper[37036]: I0312 14:38:07.149409 37036 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="5e6be5e2-926b-44e5-988d-5543084bc077"
Mar 12 14:38:07.150586 master-0 kubenswrapper[37036]: I0312 14:38:07.150531 37036 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://933935edae34e816a7599cafdc833d53da4b2f9c2d2d32ebc4e6c4b2b73720bc"
Mar 12 14:38:07.150586 master-0 kubenswrapper[37036]: I0312 14:38:07.150585 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 14:38:08.152911 master-0 kubenswrapper[37036]: I0312 14:38:08.152843 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:08.152911 master-0 kubenswrapper[37036]: I0312 14:38:08.152875 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d9a6940-2460-471e-8000-b234979b051a"
Mar 12 14:38:08.156033 master-0 kubenswrapper[37036]: I0312 14:38:08.155977 37036 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="5e6be5e2-926b-44e5-988d-5543084bc077"
Mar 12 14:38:08.418763 master-0 kubenswrapper[37036]: I0312 14:38:08.418509 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-c847675b7-vfq5t" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" containerID="cri-o://e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49" gracePeriod=15
Mar 12 14:38:08.930775 master-0 kubenswrapper[37036]: I0312 14:38:08.930724 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c847675b7-vfq5t_0323a60d-acb9-4209-a5a5-9b45cc819ac5/console/0.log"
Mar 12 14:38:08.931014 master-0 kubenswrapper[37036]: I0312 14:38:08.930802 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:38:09.012487 master-0 kubenswrapper[37036]: I0312 14:38:09.012434 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012487 master-0 kubenswrapper[37036]: I0312 14:38:09.012485 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012728 master-0 kubenswrapper[37036]: I0312 14:38:09.012518 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012728 master-0 kubenswrapper[37036]: I0312 14:38:09.012652 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdbd\" (UniqueName: \"kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012728 master-0 kubenswrapper[37036]: I0312 14:38:09.012711 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012831 master-0 kubenswrapper[37036]: I0312 14:38:09.012750 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.012831 master-0 kubenswrapper[37036]: I0312 14:38:09.012785 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert\") pod \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\" (UID: \"0323a60d-acb9-4209-a5a5-9b45cc819ac5\") "
Mar 12 14:38:09.013492 master-0 kubenswrapper[37036]: I0312 14:38:09.013071 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca" (OuterVolumeSpecName: "service-ca") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:09.013492 master-0 kubenswrapper[37036]: I0312 14:38:09.013353 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config" (OuterVolumeSpecName: "console-config") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:09.013849 master-0 kubenswrapper[37036]: I0312 14:38:09.013669 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:09.013849 master-0 kubenswrapper[37036]: I0312 14:38:09.013683 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:09.016020 master-0 kubenswrapper[37036]: I0312 14:38:09.015968 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:38:09.016942 master-0 kubenswrapper[37036]: I0312 14:38:09.016295 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:38:09.016942 master-0 kubenswrapper[37036]: I0312 14:38:09.016766 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd" (OuterVolumeSpecName: "kube-api-access-xtdbd") pod "0323a60d-acb9-4209-a5a5-9b45cc819ac5" (UID: "0323a60d-acb9-4209-a5a5-9b45cc819ac5"). InnerVolumeSpecName "kube-api-access-xtdbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114702 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdbd\" (UniqueName: \"kubernetes.io/projected/0323a60d-acb9-4209-a5a5-9b45cc819ac5-kube-api-access-xtdbd\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114736 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114746 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114755 37036 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114765 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114773 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.114758 master-0 kubenswrapper[37036]: I0312 14:38:09.114782 37036 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0323a60d-acb9-4209-a5a5-9b45cc819ac5-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:09.161326 master-0 kubenswrapper[37036]: I0312 14:38:09.161233 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c847675b7-vfq5t_0323a60d-acb9-4209-a5a5-9b45cc819ac5/console/0.log"
Mar 12 14:38:09.161326 master-0 kubenswrapper[37036]: I0312 14:38:09.161292 37036 generic.go:334] "Generic (PLEG): container finished" podID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerID="e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49" exitCode=2
Mar 12 14:38:09.162928 master-0 kubenswrapper[37036]: I0312 14:38:09.161324 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c847675b7-vfq5t" event={"ID":"0323a60d-acb9-4209-a5a5-9b45cc819ac5","Type":"ContainerDied","Data":"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"}
Mar 12 14:38:09.162928 master-0 kubenswrapper[37036]: I0312 14:38:09.161360 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c847675b7-vfq5t" event={"ID":"0323a60d-acb9-4209-a5a5-9b45cc819ac5","Type":"ContainerDied","Data":"77c6c832f2a05f89db685a86302c19f4dd2903a2a825c503719fb08eb2eb2a1d"}
Mar 12 14:38:09.162928 master-0 kubenswrapper[37036]: I0312 14:38:09.161368 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c847675b7-vfq5t"
Mar 12 14:38:09.162928 master-0 kubenswrapper[37036]: I0312 14:38:09.161380 37036 scope.go:117] "RemoveContainer" containerID="e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"
Mar 12 14:38:09.179821 master-0 kubenswrapper[37036]: I0312 14:38:09.179779 37036 scope.go:117] "RemoveContainer" containerID="e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"
Mar 12 14:38:09.180358 master-0 kubenswrapper[37036]: E0312 14:38:09.180323 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49\": container with ID starting with e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49 not found: ID does not exist" containerID="e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"
Mar 12 14:38:09.180448 master-0 kubenswrapper[37036]: I0312 14:38:09.180356 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49"} err="failed to get container status \"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49\": rpc error: code = NotFound desc = could not find container \"e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49\": container with ID starting with e2ba9e53767ff486d4655064bbffedba7e2ebc32e2cab581b35941369717ec49 not found: ID does not exist"
Mar 12 14:38:10.472947 master-0 kubenswrapper[37036]: I0312 14:38:10.472854 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 12 14:38:10.473964 master-0 kubenswrapper[37036]: I0312 14:38:10.473869 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 12 14:38:11.634444 master-0 kubenswrapper[37036]: E0312 14:38:11.632735 37036 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c0b5d556a16a6a075d3c006b53692df4861705ea22fdea0e8d5d4991d4d77098/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c0b5d556a16a6a075d3c006b53692df4861705ea22fdea0e8d5d4991d4d77098/diff: no such file or directory, extraDiskErr:
Mar 12 14:38:12.890064 master-0 kubenswrapper[37036]: I0312 14:38:12.889998 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:38:13.741370 master-0 kubenswrapper[37036]: I0312 14:38:13.741277 37036 patch_prober.go:28] interesting pod/console-59db58b99d-jwn7z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused" start-of-body=
Mar 12 14:38:13.741701 master-0 kubenswrapper[37036]: I0312 14:38:13.741413 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-59db58b99d-jwn7z" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" probeResult="failure" output="Get \"https://10.128.0.108:8443/health\": dial tcp 10.128.0.108:8443: connect: connection refused"
Mar 12 14:38:16.697386 master-0 kubenswrapper[37036]: I0312 14:38:16.697308 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 12 14:38:17.155190 master-0 kubenswrapper[37036]: I0312 14:38:17.155140 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 14:38:17.199942 master-0 kubenswrapper[37036]: I0312 14:38:17.198649 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 12 14:38:17.728735 master-0 kubenswrapper[37036]: I0312 14:38:17.728680 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 14:38:17.793302 master-0 kubenswrapper[37036]: I0312 14:38:17.793253 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 12 14:38:17.883276 master-0 kubenswrapper[37036]: I0312 14:38:17.883233 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 14:38:17.959054 master-0 kubenswrapper[37036]: I0312 14:38:17.958999 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 12 14:38:18.008993 master-0 kubenswrapper[37036]: I0312 14:38:18.008937 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 12 14:38:18.080079 master-0 kubenswrapper[37036]: I0312 14:38:18.080039 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 12 14:38:18.167086 master-0 kubenswrapper[37036]: I0312 14:38:18.167031 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 14:38:18.307243 master-0 kubenswrapper[37036]: I0312 14:38:18.307129 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 14:38:18.397000 master-0 kubenswrapper[37036]: I0312 14:38:18.395499 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 12 14:38:18.451313 master-0 kubenswrapper[37036]: I0312 14:38:18.451256 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 12 14:38:18.454319 master-0 kubenswrapper[37036]: I0312 14:38:18.454279 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 14:38:18.495846 master-0 kubenswrapper[37036]: I0312 14:38:18.495790 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 14:38:18.575098 master-0 kubenswrapper[37036]: I0312 14:38:18.573866 37036 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 12 14:38:18.665694 master-0 kubenswrapper[37036]: I0312 14:38:18.665642 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 12 14:38:18.677980 master-0 kubenswrapper[37036]: I0312 14:38:18.677923 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 12 14:38:18.680029 master-0 kubenswrapper[37036]: I0312 14:38:18.679993 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 12 14:38:18.772067 master-0 kubenswrapper[37036]: I0312 14:38:18.772019 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 14:38:18.899109 master-0 kubenswrapper[37036]: I0312 14:38:18.898995 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-fsj54"
Mar 12 14:38:18.951119 master-0 kubenswrapper[37036]: I0312 14:38:18.951053 37036 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 12 14:38:18.987134 master-0 kubenswrapper[37036]: I0312 14:38:18.987057 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 14:38:19.092159 master-0 kubenswrapper[37036]: I0312 14:38:19.092091 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-8hbfc"
Mar 12 14:38:19.143380 master-0 kubenswrapper[37036]: I0312 14:38:19.143319 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 14:38:19.219366 master-0 kubenswrapper[37036]: I0312 14:38:19.219231 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 12 14:38:19.220882 master-0 kubenswrapper[37036]: I0312 14:38:19.220837 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 12 14:38:19.368928 master-0 kubenswrapper[37036]: I0312 14:38:19.368831 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 14:38:19.476561 master-0 kubenswrapper[37036]: I0312 14:38:19.476430 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 14:38:19.722022 master-0 kubenswrapper[37036]: I0312 14:38:19.721973 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 12 14:38:19.813194 master-0 kubenswrapper[37036]: I0312 14:38:19.813145 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7zf28"
Mar 12 14:38:19.893720 master-0 kubenswrapper[37036]: I0312 14:38:19.893658 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 14:38:20.022011 master-0 kubenswrapper[37036]: I0312 14:38:20.021964 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-rn7z4"
Mar 12 14:38:20.039715 master-0 kubenswrapper[37036]: I0312 14:38:20.039652 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 14:38:20.052393 master-0 kubenswrapper[37036]: I0312 14:38:20.052337 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 14:38:20.064486 master-0 kubenswrapper[37036]: I0312 14:38:20.064398 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 12 14:38:20.177399 master-0 kubenswrapper[37036]: I0312 14:38:20.177327 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 14:38:20.223915 master-0 kubenswrapper[37036]: I0312 14:38:20.223843 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 12 14:38:20.230700 master-0 kubenswrapper[37036]: I0312 14:38:20.230666 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 14:38:20.413050 master-0 kubenswrapper[37036]: I0312 14:38:20.412957 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 14:38:20.473028 master-0 kubenswrapper[37036]: I0312 14:38:20.472979 37036 patch_prober.go:28] interesting pod/console-d7bc769d-7n7p2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443:
connect: connection refused" start-of-body= Mar 12 14:38:20.473243 master-0 kubenswrapper[37036]: I0312 14:38:20.473037 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 12 14:38:20.633345 master-0 kubenswrapper[37036]: I0312 14:38:20.633291 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 14:38:20.720761 master-0 kubenswrapper[37036]: I0312 14:38:20.720653 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 14:38:20.746291 master-0 kubenswrapper[37036]: I0312 14:38:20.746238 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:38:20.774056 master-0 kubenswrapper[37036]: I0312 14:38:20.774017 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 14:38:20.827298 master-0 kubenswrapper[37036]: I0312 14:38:20.827256 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 14:38:20.896589 master-0 kubenswrapper[37036]: I0312 14:38:20.896548 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 14:38:20.944388 master-0 kubenswrapper[37036]: I0312 14:38:20.944338 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 14:38:20.964953 master-0 kubenswrapper[37036]: I0312 14:38:20.964913 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 14:38:21.012651 master-0 kubenswrapper[37036]: I0312 14:38:21.012615 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 14:38:21.014417 master-0 kubenswrapper[37036]: I0312 14:38:21.014343 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 14:38:21.035075 master-0 kubenswrapper[37036]: I0312 14:38:21.035016 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 14:38:21.110055 master-0 kubenswrapper[37036]: I0312 14:38:21.109998 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-d48x8" Mar 12 14:38:21.121862 master-0 kubenswrapper[37036]: I0312 14:38:21.121788 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 14:38:21.206409 master-0 kubenswrapper[37036]: I0312 14:38:21.206361 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 14:38:21.271643 master-0 kubenswrapper[37036]: I0312 14:38:21.271521 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 14:38:21.279285 master-0 kubenswrapper[37036]: I0312 14:38:21.279248 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pjs2n" Mar 12 14:38:21.287152 master-0 kubenswrapper[37036]: I0312 14:38:21.287122 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 14:38:21.291540 master-0 kubenswrapper[37036]: I0312 14:38:21.291518 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 14:38:21.304711 master-0 kubenswrapper[37036]: I0312 14:38:21.304685 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 14:38:21.340424 master-0 kubenswrapper[37036]: I0312 14:38:21.340368 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 14:38:21.346019 master-0 kubenswrapper[37036]: I0312 14:38:21.345977 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 14:38:21.444434 master-0 kubenswrapper[37036]: I0312 14:38:21.444362 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 14:38:21.576759 master-0 kubenswrapper[37036]: I0312 14:38:21.576574 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 14:38:21.634916 master-0 kubenswrapper[37036]: I0312 14:38:21.634845 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 14:38:21.708790 master-0 kubenswrapper[37036]: I0312 14:38:21.708726 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 14:38:21.766600 master-0 kubenswrapper[37036]: I0312 14:38:21.766547 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 14:38:21.797083 master-0 kubenswrapper[37036]: I0312 14:38:21.797027 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82tbw" Mar 12 14:38:21.859545 master-0 kubenswrapper[37036]: I0312 14:38:21.859413 37036 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 14:38:21.867944 master-0 kubenswrapper[37036]: I0312 14:38:21.867908 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 14:38:21.875560 master-0 kubenswrapper[37036]: I0312 14:38:21.875509 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 14:38:21.914145 master-0 kubenswrapper[37036]: I0312 14:38:21.914082 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 14:38:21.939655 master-0 kubenswrapper[37036]: I0312 14:38:21.939578 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 14:38:21.945698 master-0 kubenswrapper[37036]: I0312 14:38:21.945665 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 14:38:21.995142 master-0 kubenswrapper[37036]: I0312 14:38:21.995034 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 14:38:22.086736 master-0 kubenswrapper[37036]: I0312 14:38:22.086689 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 14:38:22.109185 master-0 kubenswrapper[37036]: I0312 14:38:22.109125 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 14:38:22.117253 master-0 kubenswrapper[37036]: I0312 14:38:22.117132 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 14:38:22.232115 master-0 kubenswrapper[37036]: I0312 
14:38:22.193553 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 14:38:22.232115 master-0 kubenswrapper[37036]: I0312 14:38:22.206364 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 14:38:22.232115 master-0 kubenswrapper[37036]: I0312 14:38:22.216209 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 14:38:22.275036 master-0 kubenswrapper[37036]: I0312 14:38:22.274989 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 14:38:22.336399 master-0 kubenswrapper[37036]: I0312 14:38:22.335566 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 14:38:22.342633 master-0 kubenswrapper[37036]: I0312 14:38:22.342540 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 14:38:22.369529 master-0 kubenswrapper[37036]: I0312 14:38:22.369375 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 14:38:22.403078 master-0 kubenswrapper[37036]: I0312 14:38:22.403027 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 14:38:22.559271 master-0 kubenswrapper[37036]: I0312 14:38:22.559221 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:38:22.596054 master-0 kubenswrapper[37036]: I0312 14:38:22.595996 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-mxcxn" Mar 12 14:38:22.651919 
master-0 kubenswrapper[37036]: I0312 14:38:22.651802 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 14:38:22.734141 master-0 kubenswrapper[37036]: I0312 14:38:22.734100 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 14:38:22.748439 master-0 kubenswrapper[37036]: I0312 14:38:22.748392 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 14:38:22.751065 master-0 kubenswrapper[37036]: I0312 14:38:22.751040 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 14:38:22.832147 master-0 kubenswrapper[37036]: I0312 14:38:22.832083 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 14:38:22.867200 master-0 kubenswrapper[37036]: I0312 14:38:22.867143 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 14:38:22.875931 master-0 kubenswrapper[37036]: I0312 14:38:22.875877 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 14:38:22.884183 master-0 kubenswrapper[37036]: I0312 14:38:22.884135 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 14:38:22.915310 master-0 kubenswrapper[37036]: I0312 14:38:22.915176 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 14:38:22.923098 master-0 kubenswrapper[37036]: I0312 14:38:22.922846 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 14:38:22.935756 master-0 kubenswrapper[37036]: I0312 14:38:22.935713 
37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 14:38:22.951834 master-0 kubenswrapper[37036]: I0312 14:38:22.951784 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-kblmx" Mar 12 14:38:22.974886 master-0 kubenswrapper[37036]: I0312 14:38:22.974836 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 14:38:23.024101 master-0 kubenswrapper[37036]: I0312 14:38:23.024054 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-x42kh" Mar 12 14:38:23.102218 master-0 kubenswrapper[37036]: I0312 14:38:23.102155 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 14:38:23.205229 master-0 kubenswrapper[37036]: I0312 14:38:23.205107 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 14:38:23.235040 master-0 kubenswrapper[37036]: I0312 14:38:23.234986 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 14:38:23.243290 master-0 kubenswrapper[37036]: I0312 14:38:23.243243 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 14:38:23.271849 master-0 kubenswrapper[37036]: I0312 14:38:23.271781 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-787xq" Mar 12 14:38:23.414919 master-0 kubenswrapper[37036]: I0312 14:38:23.414846 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 14:38:23.432986 master-0 kubenswrapper[37036]: I0312 14:38:23.432939 
37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 14:38:23.471772 master-0 kubenswrapper[37036]: I0312 14:38:23.471665 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 14:38:23.486455 master-0 kubenswrapper[37036]: I0312 14:38:23.486408 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 14:38:23.570874 master-0 kubenswrapper[37036]: I0312 14:38:23.570821 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 14:38:23.636436 master-0 kubenswrapper[37036]: I0312 14:38:23.636386 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 14:38:23.739779 master-0 kubenswrapper[37036]: I0312 14:38:23.739716 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 14:38:23.745024 master-0 kubenswrapper[37036]: I0312 14:38:23.744971 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:38:23.748767 master-0 kubenswrapper[37036]: I0312 14:38:23.748705 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:38:23.764026 master-0 kubenswrapper[37036]: I0312 14:38:23.763977 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 14:38:23.785860 master-0 kubenswrapper[37036]: I0312 14:38:23.785786 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 14:38:23.820173 master-0 kubenswrapper[37036]: I0312 14:38:23.820113 
37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 14:38:23.820404 master-0 kubenswrapper[37036]: I0312 14:38:23.820122 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7nn9s21bftmgp" Mar 12 14:38:23.855138 master-0 kubenswrapper[37036]: I0312 14:38:23.855086 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 14:38:23.883148 master-0 kubenswrapper[37036]: I0312 14:38:23.883094 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 14:38:23.895917 master-0 kubenswrapper[37036]: I0312 14:38:23.895872 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-btzl2" Mar 12 14:38:23.950376 master-0 kubenswrapper[37036]: I0312 14:38:23.950337 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 14:38:24.020477 master-0 kubenswrapper[37036]: I0312 14:38:24.020383 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 14:38:24.108991 master-0 kubenswrapper[37036]: I0312 14:38:24.108953 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 14:38:24.137393 master-0 kubenswrapper[37036]: I0312 14:38:24.137325 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 14:38:24.389647 master-0 kubenswrapper[37036]: I0312 14:38:24.389542 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 14:38:24.489929 master-0 kubenswrapper[37036]: I0312 14:38:24.489849 37036 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 14:38:24.492891 master-0 kubenswrapper[37036]: I0312 14:38:24.492784 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 14:38:24.520737 master-0 kubenswrapper[37036]: I0312 14:38:24.520675 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 14:38:24.592859 master-0 kubenswrapper[37036]: I0312 14:38:24.592794 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 14:38:24.595069 master-0 kubenswrapper[37036]: I0312 14:38:24.595020 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 14:38:24.631658 master-0 kubenswrapper[37036]: I0312 14:38:24.631587 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 14:38:24.701944 master-0 kubenswrapper[37036]: I0312 14:38:24.701820 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 14:38:24.750601 master-0 kubenswrapper[37036]: I0312 14:38:24.750227 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 14:38:24.773047 master-0 kubenswrapper[37036]: I0312 14:38:24.772990 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 14:38:24.886214 master-0 kubenswrapper[37036]: I0312 14:38:24.886159 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 14:38:24.903011 master-0 
kubenswrapper[37036]: I0312 14:38:24.902949 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 14:38:24.908814 master-0 kubenswrapper[37036]: I0312 14:38:24.908780 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 14:38:24.910327 master-0 kubenswrapper[37036]: I0312 14:38:24.910299 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 14:38:24.933515 master-0 kubenswrapper[37036]: I0312 14:38:24.933467 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 14:38:24.951466 master-0 kubenswrapper[37036]: I0312 14:38:24.951407 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 14:38:25.081208 master-0 kubenswrapper[37036]: I0312 14:38:25.081159 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 14:38:25.109780 master-0 kubenswrapper[37036]: I0312 14:38:25.109737 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 14:38:25.152666 master-0 kubenswrapper[37036]: I0312 14:38:25.152628 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 14:38:25.198546 master-0 kubenswrapper[37036]: I0312 14:38:25.198456 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 14:38:25.271311 master-0 kubenswrapper[37036]: I0312 14:38:25.271264 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 14:38:25.297392 master-0 kubenswrapper[37036]: I0312 
14:38:25.297348 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 14:38:25.304405 master-0 kubenswrapper[37036]: I0312 14:38:25.304386 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 14:38:25.328805 master-0 kubenswrapper[37036]: I0312 14:38:25.328770 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 14:38:25.392412 master-0 kubenswrapper[37036]: I0312 14:38:25.392285 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 14:38:25.485123 master-0 kubenswrapper[37036]: I0312 14:38:25.485085 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 14:38:25.493074 master-0 kubenswrapper[37036]: I0312 14:38:25.493024 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 14:38:25.585707 master-0 kubenswrapper[37036]: I0312 14:38:25.585645 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 14:38:25.661835 master-0 kubenswrapper[37036]: I0312 14:38:25.661707 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 14:38:25.681816 master-0 kubenswrapper[37036]: I0312 14:38:25.681756 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 14:38:25.691791 master-0 kubenswrapper[37036]: I0312 14:38:25.691662 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-jggtb" Mar 12 14:38:25.823374 master-0 
kubenswrapper[37036]: I0312 14:38:25.823329 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 14:38:25.913438 master-0 kubenswrapper[37036]: I0312 14:38:25.913334 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 14:38:25.921676 master-0 kubenswrapper[37036]: I0312 14:38:25.921638 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 14:38:26.055073 master-0 kubenswrapper[37036]: I0312 14:38:26.055009 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 14:38:26.066372 master-0 kubenswrapper[37036]: I0312 14:38:26.066320 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 14:38:26.159008 master-0 kubenswrapper[37036]: I0312 14:38:26.158959 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-ct5mf" Mar 12 14:38:26.235300 master-0 kubenswrapper[37036]: I0312 14:38:26.235194 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 14:38:26.263370 master-0 kubenswrapper[37036]: I0312 14:38:26.263332 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 14:38:26.392637 master-0 kubenswrapper[37036]: I0312 14:38:26.392566 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 14:38:26.437052 master-0 kubenswrapper[37036]: I0312 14:38:26.436998 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 14:38:26.475490 master-0 kubenswrapper[37036]: I0312 14:38:26.475433 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 14:38:26.549255 master-0 kubenswrapper[37036]: I0312 14:38:26.549202 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 14:38:26.626558 master-0 kubenswrapper[37036]: I0312 14:38:26.626503 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 14:38:26.660981 master-0 kubenswrapper[37036]: I0312 14:38:26.660725 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 14:38:26.732728 master-0 kubenswrapper[37036]: I0312 14:38:26.732681 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 14:38:26.823008 master-0 kubenswrapper[37036]: I0312 14:38:26.815990 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-t6rc9" Mar 12 14:38:26.831944 master-0 kubenswrapper[37036]: I0312 14:38:26.831774 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 14:38:26.856204 master-0 kubenswrapper[37036]: I0312 14:38:26.856140 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 14:38:26.928614 master-0 kubenswrapper[37036]: I0312 14:38:26.928537 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 14:38:26.934731 master-0 kubenswrapper[37036]: I0312 14:38:26.934696 37036 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 14:38:26.959429 master-0 kubenswrapper[37036]: I0312 14:38:26.959362 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 14:38:27.039822 master-0 kubenswrapper[37036]: I0312 14:38:27.039749 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kl5h5" Mar 12 14:38:27.078680 master-0 kubenswrapper[37036]: I0312 14:38:27.078538 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 14:38:27.126599 master-0 kubenswrapper[37036]: I0312 14:38:27.126516 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 14:38:27.187501 master-0 kubenswrapper[37036]: I0312 14:38:27.187454 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 14:38:27.276373 master-0 kubenswrapper[37036]: I0312 14:38:27.276311 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 14:38:27.359797 master-0 kubenswrapper[37036]: I0312 14:38:27.359666 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 14:38:27.380591 master-0 kubenswrapper[37036]: I0312 14:38:27.380526 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 14:38:27.384345 master-0 kubenswrapper[37036]: I0312 14:38:27.384305 37036 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 14:38:27.392120 master-0 kubenswrapper[37036]: I0312 14:38:27.392050 37036 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zmfxg" podStartSLOduration=42.647827249 podStartE2EDuration="45.392029233s" podCreationTimestamp="2026-03-12 14:37:42 +0000 UTC" firstStartedPulling="2026-03-12 14:37:43.622774974 +0000 UTC m=+122.630515911" lastFinishedPulling="2026-03-12 14:37:46.366976958 +0000 UTC m=+125.374717895" observedRunningTime="2026-03-12 14:38:06.223661948 +0000 UTC m=+145.231402895" watchObservedRunningTime="2026-03-12 14:38:27.392029233 +0000 UTC m=+166.399770170" Mar 12 14:38:27.392830 master-0 kubenswrapper[37036]: I0312 14:38:27.392801 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-console/console-c847675b7-vfq5t","openshift-console/console-6b77f48c6d-w6489"] Mar 12 14:38:27.392884 master-0 kubenswrapper[37036]: I0312 14:38:27.392858 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 14:38:27.392884 master-0 kubenswrapper[37036]: I0312 14:38:27.392881 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"] Mar 12 14:38:27.401376 master-0 kubenswrapper[37036]: I0312 14:38:27.400683 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 14:38:27.411228 master-0 kubenswrapper[37036]: I0312 14:38:27.411173 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 14:38:27.422206 master-0 kubenswrapper[37036]: I0312 14:38:27.422123 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.422098439 podStartE2EDuration="21.422098439s" podCreationTimestamp="2026-03-12 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:38:27.416556352 +0000 UTC m=+166.424297309" watchObservedRunningTime="2026-03-12 14:38:27.422098439 +0000 UTC m=+166.429839376" Mar 12 14:38:27.502697 master-0 kubenswrapper[37036]: I0312 14:38:27.502649 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 14:38:27.541614 master-0 kubenswrapper[37036]: I0312 14:38:27.541567 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 14:38:27.558642 master-0 kubenswrapper[37036]: I0312 14:38:27.558601 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 14:38:27.593893 master-0 kubenswrapper[37036]: I0312 14:38:27.593834 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 14:38:27.600941 master-0 kubenswrapper[37036]: I0312 14:38:27.600869 37036 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 14:38:27.601385 master-0 kubenswrapper[37036]: I0312 14:38:27.601358 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" containerID="cri-o://ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0" gracePeriod=5 Mar 12 14:38:27.611131 master-0 kubenswrapper[37036]: I0312 14:38:27.611039 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 14:38:27.649984 master-0 kubenswrapper[37036]: I0312 14:38:27.649951 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 14:38:27.654413 master-0 kubenswrapper[37036]: I0312 14:38:27.654158 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-vbgk2" Mar 12 14:38:27.671162 master-0 kubenswrapper[37036]: I0312 14:38:27.671113 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 14:38:27.746550 master-0 kubenswrapper[37036]: I0312 14:38:27.746431 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 14:38:27.747288 master-0 kubenswrapper[37036]: I0312 14:38:27.747260 37036 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 14:38:27.783399 master-0 kubenswrapper[37036]: I0312 14:38:27.783357 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 14:38:27.816485 master-0 kubenswrapper[37036]: I0312 14:38:27.816433 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 14:38:27.851196 master-0 kubenswrapper[37036]: I0312 14:38:27.851137 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 14:38:27.941495 master-0 kubenswrapper[37036]: I0312 14:38:27.941343 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 14:38:27.977459 master-0 kubenswrapper[37036]: I0312 14:38:27.977413 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 14:38:28.037363 master-0 kubenswrapper[37036]: I0312 14:38:28.037300 37036 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 14:38:28.041382 master-0 kubenswrapper[37036]: I0312 14:38:28.041331 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-d28jx" Mar 12 14:38:28.045211 master-0 kubenswrapper[37036]: I0312 14:38:28.045161 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-cfs7s" Mar 12 14:38:28.060315 master-0 kubenswrapper[37036]: I0312 14:38:28.060259 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 14:38:28.119069 master-0 kubenswrapper[37036]: I0312 14:38:28.117929 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 14:38:28.140516 master-0 kubenswrapper[37036]: I0312 14:38:28.140427 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 14:38:28.175022 master-0 kubenswrapper[37036]: I0312 14:38:28.174964 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 14:38:28.206828 master-0 kubenswrapper[37036]: I0312 14:38:28.206668 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 14:38:28.240558 master-0 kubenswrapper[37036]: I0312 14:38:28.240499 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-28q99" Mar 12 14:38:28.274223 master-0 kubenswrapper[37036]: I0312 14:38:28.274178 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 14:38:28.372549 master-0 kubenswrapper[37036]: I0312 14:38:28.372502 37036 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 14:38:28.409243 master-0 kubenswrapper[37036]: I0312 14:38:28.409201 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 14:38:28.422304 master-0 kubenswrapper[37036]: I0312 14:38:28.422255 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 14:38:28.595803 master-0 kubenswrapper[37036]: I0312 14:38:28.595751 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 14:38:28.648055 master-0 kubenswrapper[37036]: I0312 14:38:28.647994 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-q2s6g" Mar 12 14:38:28.671767 master-0 kubenswrapper[37036]: I0312 14:38:28.671715 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 14:38:28.694167 master-0 kubenswrapper[37036]: I0312 14:38:28.694126 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 14:38:28.709139 master-0 kubenswrapper[37036]: I0312 14:38:28.709091 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 14:38:28.903350 master-0 kubenswrapper[37036]: I0312 14:38:28.903154 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 14:38:28.932049 master-0 kubenswrapper[37036]: I0312 14:38:28.931868 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 14:38:28.959296 master-0 kubenswrapper[37036]: I0312 
14:38:28.959234 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 14:38:29.008733 master-0 kubenswrapper[37036]: I0312 14:38:29.008658 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 14:38:29.044721 master-0 kubenswrapper[37036]: I0312 14:38:29.044638 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 14:38:29.054333 master-0 kubenswrapper[37036]: I0312 14:38:29.054255 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gbqf6" Mar 12 14:38:29.056639 master-0 kubenswrapper[37036]: I0312 14:38:29.056575 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 14:38:29.251782 master-0 kubenswrapper[37036]: I0312 14:38:29.251707 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" path="/var/lib/kubelet/pods/0323a60d-acb9-4209-a5a5-9b45cc819ac5/volumes" Mar 12 14:38:29.253347 master-0 kubenswrapper[37036]: I0312 14:38:29.253297 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" path="/var/lib/kubelet/pods/1dd55143-3e81-4eb5-9f83-b4c13614dd69/volumes" Mar 12 14:38:29.266239 master-0 kubenswrapper[37036]: I0312 14:38:29.266147 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 14:38:29.275794 master-0 kubenswrapper[37036]: I0312 14:38:29.275720 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 14:38:29.285058 master-0 kubenswrapper[37036]: I0312 14:38:29.285020 37036 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 14:38:29.377152 master-0 kubenswrapper[37036]: I0312 14:38:29.377106 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 14:38:29.394589 master-0 kubenswrapper[37036]: I0312 14:38:29.394532 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 14:38:29.426177 master-0 kubenswrapper[37036]: I0312 14:38:29.426112 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 14:38:29.447029 master-0 kubenswrapper[37036]: I0312 14:38:29.446969 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 14:38:29.494509 master-0 kubenswrapper[37036]: I0312 14:38:29.494455 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 14:38:29.494974 master-0 kubenswrapper[37036]: I0312 14:38:29.494940 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 14:38:29.574783 master-0 kubenswrapper[37036]: I0312 14:38:29.574653 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:38:29.579151 master-0 kubenswrapper[37036]: I0312 14:38:29.579108 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 14:38:29.664141 master-0 kubenswrapper[37036]: I0312 14:38:29.664066 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 14:38:29.683270 master-0 kubenswrapper[37036]: I0312 14:38:29.683171 37036 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 14:38:29.763696 master-0 kubenswrapper[37036]: I0312 14:38:29.763608 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 14:38:29.822745 master-0 kubenswrapper[37036]: I0312 14:38:29.822676 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 14:38:29.830288 master-0 kubenswrapper[37036]: I0312 14:38:29.830185 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 14:38:29.844198 master-0 kubenswrapper[37036]: I0312 14:38:29.844142 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 14:38:29.929205 master-0 kubenswrapper[37036]: I0312 14:38:29.929137 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 14:38:29.963296 master-0 kubenswrapper[37036]: I0312 14:38:29.963237 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 14:38:30.014889 master-0 kubenswrapper[37036]: I0312 14:38:30.014821 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 14:38:30.149735 master-0 kubenswrapper[37036]: I0312 14:38:30.149514 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 14:38:30.192584 master-0 kubenswrapper[37036]: I0312 14:38:30.192522 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 14:38:30.245130 master-0 kubenswrapper[37036]: I0312 14:38:30.245064 37036 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Mar 12 14:38:30.257036 master-0 kubenswrapper[37036]: I0312 14:38:30.256976 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 14:38:30.262299 master-0 kubenswrapper[37036]: I0312 14:38:30.262253 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 14:38:30.313221 master-0 kubenswrapper[37036]: I0312 14:38:30.313166 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 14:38:30.502532 master-0 kubenswrapper[37036]: I0312 14:38:30.502483 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 14:38:30.590297 master-0 kubenswrapper[37036]: I0312 14:38:30.590247 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 14:38:30.725572 master-0 kubenswrapper[37036]: I0312 14:38:30.725485 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 14:38:30.775402 master-0 kubenswrapper[37036]: I0312 14:38:30.775292 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 14:38:30.819076 master-0 kubenswrapper[37036]: I0312 14:38:30.819021 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 14:38:30.898243 master-0 kubenswrapper[37036]: I0312 14:38:30.898181 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 14:38:30.939785 master-0 kubenswrapper[37036]: I0312 14:38:30.939743 37036 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 14:38:30.954053 master-0 kubenswrapper[37036]: I0312 14:38:30.953887 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 14:38:30.975129 master-0 kubenswrapper[37036]: I0312 14:38:30.975048 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 14:38:31.047129 master-0 kubenswrapper[37036]: I0312 14:38:31.046997 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 14:38:31.102095 master-0 kubenswrapper[37036]: I0312 14:38:31.102039 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 14:38:31.137318 master-0 kubenswrapper[37036]: I0312 14:38:31.137275 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 14:38:31.376753 master-0 kubenswrapper[37036]: I0312 14:38:31.376595 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 14:38:31.454488 master-0 kubenswrapper[37036]: I0312 14:38:31.454426 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 14:38:31.494267 master-0 kubenswrapper[37036]: I0312 14:38:31.494187 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 14:38:31.503052 master-0 kubenswrapper[37036]: I0312 14:38:31.502991 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 14:38:31.504316 master-0 kubenswrapper[37036]: I0312 14:38:31.504271 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 14:38:31.539203 master-0 kubenswrapper[37036]: I0312 14:38:31.539151 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 14:38:31.568602 master-0 kubenswrapper[37036]: I0312 14:38:31.568534 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 14:38:31.701645 master-0 kubenswrapper[37036]: I0312 14:38:31.701522 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 14:38:31.816249 master-0 kubenswrapper[37036]: I0312 14:38:31.816183 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 14:38:31.825821 master-0 kubenswrapper[37036]: I0312 14:38:31.825781 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 14:38:32.054447 master-0 kubenswrapper[37036]: I0312 14:38:32.054349 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 14:38:32.090708 master-0 kubenswrapper[37036]: I0312 14:38:32.090637 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 14:38:32.117378 master-0 kubenswrapper[37036]: I0312 14:38:32.117316 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 14:38:32.195766 master-0 kubenswrapper[37036]: I0312 14:38:32.195725 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 14:38:32.301448 master-0 kubenswrapper[37036]: I0312 14:38:32.301407 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 14:38:32.305260 master-0 kubenswrapper[37036]: I0312 14:38:32.304997 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 14:38:32.542910 master-0 kubenswrapper[37036]: I0312 14:38:32.542830 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 14:38:32.839027 master-0 kubenswrapper[37036]: I0312 14:38:32.838824 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 14:38:33.197782 master-0 kubenswrapper[37036]: I0312 14:38:33.197725 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log" Mar 12 14:38:33.198364 master-0 kubenswrapper[37036]: I0312 14:38:33.197830 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:38:33.287813 master-0 kubenswrapper[37036]: I0312 14:38:33.287757 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 14:38:33.288031 master-0 kubenswrapper[37036]: I0312 14:38:33.287867 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:38:33.288181 master-0 kubenswrapper[37036]: I0312 14:38:33.288146 37036 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:38:33.339687 master-0 kubenswrapper[37036]: I0312 14:38:33.339625 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log" Mar 12 14:38:33.339687 master-0 kubenswrapper[37036]: I0312 14:38:33.339681 37036 generic.go:334] "Generic (PLEG): container finished" podID="a814bd60de133d95cf99630a978c017e" containerID="ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0" exitCode=137 Mar 12 14:38:33.339949 master-0 kubenswrapper[37036]: I0312 14:38:33.339723 37036 scope.go:117] "RemoveContainer" containerID="ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0" Mar 12 14:38:33.339949 master-0 kubenswrapper[37036]: I0312 14:38:33.339755 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 14:38:33.353185 master-0 kubenswrapper[37036]: I0312 14:38:33.352717 37036 scope.go:117] "RemoveContainer" containerID="ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0" Mar 12 14:38:33.353185 master-0 kubenswrapper[37036]: E0312 14:38:33.353144 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0\": container with ID starting with ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0 not found: ID does not exist" containerID="ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0" Mar 12 14:38:33.353185 master-0 kubenswrapper[37036]: I0312 14:38:33.353173 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0"} err="failed to get container status \"ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0\": rpc error: code = NotFound desc = could not find container \"ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0\": container with ID starting with ac12e82d73b7c87bfc300714078d0fbfbd14306ee32b92a1a487af0f8c03b0e0 not found: ID does not exist" Mar 12 14:38:33.369857 master-0 kubenswrapper[37036]: I0312 14:38:33.369793 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2qg98" Mar 12 14:38:33.389156 master-0 kubenswrapper[37036]: I0312 14:38:33.388896 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 14:38:33.389156 master-0 kubenswrapper[37036]: 
I0312 14:38:33.389089 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 14:38:33.389156 master-0 kubenswrapper[37036]: I0312 14:38:33.388985 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log" (OuterVolumeSpecName: "var-log") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:38:33.389156 master-0 kubenswrapper[37036]: I0312 14:38:33.389123 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389253 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389475 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests" (OuterVolumeSpecName: "manifests") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389578 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389718 37036 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") on node \"master-0\" DevicePath \"\"" Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389737 37036 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") on node \"master-0\" DevicePath \"\"" Mar 12 14:38:33.389832 master-0 kubenswrapper[37036]: I0312 14:38:33.389749 37036 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:38:33.398568 master-0 kubenswrapper[37036]: I0312 14:38:33.398482 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:38:33.494579 master-0 kubenswrapper[37036]: I0312 14:38:33.494508 37036 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:38:33.529869 master-0 kubenswrapper[37036]: I0312 14:38:33.529813 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 14:38:33.616691 master-0 kubenswrapper[37036]: I0312 14:38:33.616632 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 14:38:35.242837 master-0 kubenswrapper[37036]: I0312 14:38:35.242753 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a814bd60de133d95cf99630a978c017e" path="/var/lib/kubelet/pods/a814bd60de133d95cf99630a978c017e/volumes" Mar 12 14:38:35.982775 master-0 kubenswrapper[37036]: I0312 14:38:35.982722 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-79b7956d9f-qkhd4"] Mar 12 14:38:35.983009 master-0 kubenswrapper[37036]: E0312 14:38:35.982989 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" Mar 12 14:38:35.983009 master-0 kubenswrapper[37036]: I0312 14:38:35.983001 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: E0312 14:38:35.983022 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: I0312 14:38:35.983028 37036 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: E0312 14:38:35.983034 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: I0312 14:38:35.983042 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: E0312 14:38:35.983067 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" containerName="installer" Mar 12 14:38:35.983086 master-0 kubenswrapper[37036]: I0312 14:38:35.983073 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" containerName="installer" Mar 12 14:38:35.983327 master-0 kubenswrapper[37036]: I0312 14:38:35.983200 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd55143-3e81-4eb5-9f83-b4c13614dd69" containerName="console" Mar 12 14:38:35.983327 master-0 kubenswrapper[37036]: I0312 14:38:35.983212 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 14:38:35.983327 master-0 kubenswrapper[37036]: I0312 14:38:35.983223 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="0323a60d-acb9-4209-a5a5-9b45cc819ac5" containerName="console" Mar 12 14:38:35.983327 master-0 kubenswrapper[37036]: I0312 14:38:35.983241 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b3151f-a9b1-43e7-9aec-96d4ff896bf2" containerName="installer" Mar 12 14:38:35.984706 master-0 kubenswrapper[37036]: I0312 14:38:35.984684 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:35.987083 master-0 kubenswrapper[37036]: I0312 14:38:35.987036 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 12 14:38:35.988006 master-0 kubenswrapper[37036]: I0312 14:38:35.987690 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 12 14:38:35.988006 master-0 kubenswrapper[37036]: I0312 14:38:35.987917 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 12 14:38:35.990872 master-0 kubenswrapper[37036]: I0312 14:38:35.990503 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 12 14:38:35.991590 master-0 kubenswrapper[37036]: I0312 14:38:35.991467 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 12 14:38:35.992408 master-0 kubenswrapper[37036]: I0312 14:38:35.991795 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9o8vb7iumjpal" Mar 12 14:38:36.028929 master-0 kubenswrapper[37036]: I0312 14:38:36.028845 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029130 master-0 kubenswrapper[37036]: I0312 14:38:36.028976 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb7v5\" 
(UniqueName: \"kubernetes.io/projected/71a335bd-078a-4b8c-ae09-2e40765034d3-kube-api-access-wb7v5\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029130 master-0 kubenswrapper[37036]: I0312 14:38:36.029028 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029130 master-0 kubenswrapper[37036]: I0312 14:38:36.029059 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029130 master-0 kubenswrapper[37036]: I0312 14:38:36.029101 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-grpc-tls\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029130 master-0 kubenswrapper[37036]: I0312 14:38:36.029129 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-tls\") pod 
\"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029367 master-0 kubenswrapper[37036]: I0312 14:38:36.029299 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.029422 master-0 kubenswrapper[37036]: I0312 14:38:36.029400 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71a335bd-078a-4b8c-ae09-2e40765034d3-metrics-client-ca\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.062191 master-0 kubenswrapper[37036]: I0312 14:38:36.062139 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79b7956d9f-qkhd4"] Mar 12 14:38:36.130599 master-0 kubenswrapper[37036]: I0312 14:38:36.130522 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.130599 master-0 kubenswrapper[37036]: I0312 14:38:36.130589 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb7v5\" (UniqueName: 
\"kubernetes.io/projected/71a335bd-078a-4b8c-ae09-2e40765034d3-kube-api-access-wb7v5\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131392 master-0 kubenswrapper[37036]: I0312 14:38:36.130887 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131392 master-0 kubenswrapper[37036]: I0312 14:38:36.131100 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131392 master-0 kubenswrapper[37036]: I0312 14:38:36.131242 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-grpc-tls\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131392 master-0 kubenswrapper[37036]: I0312 14:38:36.131316 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-tls\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " 
pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131626 master-0 kubenswrapper[37036]: I0312 14:38:36.131555 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.131626 master-0 kubenswrapper[37036]: I0312 14:38:36.131614 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71a335bd-078a-4b8c-ae09-2e40765034d3-metrics-client-ca\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.132608 master-0 kubenswrapper[37036]: I0312 14:38:36.132575 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/71a335bd-078a-4b8c-ae09-2e40765034d3-metrics-client-ca\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.134401 master-0 kubenswrapper[37036]: I0312 14:38:36.134354 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.134867 master-0 kubenswrapper[37036]: I0312 14:38:36.134812 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.135176 master-0 kubenswrapper[37036]: I0312 14:38:36.135141 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.137002 master-0 kubenswrapper[37036]: I0312 14:38:36.136957 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-grpc-tls\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.143749 master-0 kubenswrapper[37036]: I0312 14:38:36.143694 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-tls\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.143749 master-0 kubenswrapper[37036]: I0312 14:38:36.143710 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/71a335bd-078a-4b8c-ae09-2e40765034d3-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: 
\"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.148479 master-0 kubenswrapper[37036]: I0312 14:38:36.148416 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb7v5\" (UniqueName: \"kubernetes.io/projected/71a335bd-078a-4b8c-ae09-2e40765034d3-kube-api-access-wb7v5\") pod \"thanos-querier-79b7956d9f-qkhd4\" (UID: \"71a335bd-078a-4b8c-ae09-2e40765034d3\") " pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.299350 master-0 kubenswrapper[37036]: I0312 14:38:36.299271 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" Mar 12 14:38:36.690777 master-0 kubenswrapper[37036]: I0312 14:38:36.690698 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79b7956d9f-qkhd4"] Mar 12 14:38:36.695323 master-0 kubenswrapper[37036]: W0312 14:38:36.695107 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71a335bd_078a_4b8c_ae09_2e40765034d3.slice/crio-a9e65978ea8b1eaace5558ec75200a37f0534acb7be2c5091551e1dc6bfb5aa1 WatchSource:0}: Error finding container a9e65978ea8b1eaace5558ec75200a37f0534acb7be2c5091551e1dc6bfb5aa1: Status 404 returned error can't find the container with id a9e65978ea8b1eaace5558ec75200a37f0534acb7be2c5091551e1dc6bfb5aa1 Mar 12 14:38:37.375390 master-0 kubenswrapper[37036]: I0312 14:38:37.375286 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"a9e65978ea8b1eaace5558ec75200a37f0534acb7be2c5091551e1dc6bfb5aa1"} Mar 12 14:38:38.628247 master-0 kubenswrapper[37036]: I0312 14:38:38.628011 37036 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"] Mar 12 14:38:38.629417 master-0 kubenswrapper[37036]: I0312 14:38:38.629359 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.632443 master-0 kubenswrapper[37036]: I0312 14:38:38.632417 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2fm9mkprqhut6" Mar 12 14:38:38.636801 master-0 kubenswrapper[37036]: I0312 14:38:38.636776 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"] Mar 12 14:38:38.647592 master-0 kubenswrapper[37036]: I0312 14:38:38.647526 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"] Mar 12 14:38:38.647790 master-0 kubenswrapper[37036]: I0312 14:38:38.647757 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" containerName="metrics-server" containerID="cri-o://a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757" gracePeriod=170 Mar 12 14:38:38.666787 master-0 kubenswrapper[37036]: I0312 14:38:38.666718 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-metrics-server-audit-profiles\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.666787 master-0 kubenswrapper[37036]: I0312 14:38:38.666789 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-client-certs\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.667080 master-0 kubenswrapper[37036]: I0312 14:38:38.666824 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5h7\" (UniqueName: \"kubernetes.io/projected/685f22f3-dec5-476b-98a0-0cb73da77a3f-kube-api-access-8s5h7\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.667080 master-0 kubenswrapper[37036]: I0312 14:38:38.666862 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-server-tls\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.667080 master-0 kubenswrapper[37036]: I0312 14:38:38.666888 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.667080 master-0 kubenswrapper[37036]: I0312 14:38:38.666985 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/685f22f3-dec5-476b-98a0-0cb73da77a3f-audit-log\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") 
" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.667080 master-0 kubenswrapper[37036]: I0312 14:38:38.667020 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-client-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768038 master-0 kubenswrapper[37036]: I0312 14:38:38.767973 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768309 master-0 kubenswrapper[37036]: I0312 14:38:38.768284 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/685f22f3-dec5-476b-98a0-0cb73da77a3f-audit-log\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768410 master-0 kubenswrapper[37036]: I0312 14:38:38.768325 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-client-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768472 master-0 kubenswrapper[37036]: I0312 14:38:38.768409 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-metrics-server-audit-profiles\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768472 master-0 kubenswrapper[37036]: I0312 14:38:38.768464 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-client-certs\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768570 master-0 kubenswrapper[37036]: I0312 14:38:38.768506 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s5h7\" (UniqueName: \"kubernetes.io/projected/685f22f3-dec5-476b-98a0-0cb73da77a3f-kube-api-access-8s5h7\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768570 master-0 kubenswrapper[37036]: I0312 14:38:38.768530 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-server-tls\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.768687 master-0 kubenswrapper[37036]: I0312 14:38:38.768642 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/685f22f3-dec5-476b-98a0-0cb73da77a3f-audit-log\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.769757 master-0 
kubenswrapper[37036]: I0312 14:38:38.769735 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-metrics-server-audit-profiles\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.770037 master-0 kubenswrapper[37036]: I0312 14:38:38.770003 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/685f22f3-dec5-476b-98a0-0cb73da77a3f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.772985 master-0 kubenswrapper[37036]: I0312 14:38:38.772169 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-client-certs\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.772985 master-0 kubenswrapper[37036]: I0312 14:38:38.772939 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-client-ca-bundle\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.773362 master-0 kubenswrapper[37036]: I0312 14:38:38.773325 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/685f22f3-dec5-476b-98a0-0cb73da77a3f-secret-metrics-server-tls\") pod 
\"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.783916 master-0 kubenswrapper[37036]: I0312 14:38:38.783864 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s5h7\" (UniqueName: \"kubernetes.io/projected/685f22f3-dec5-476b-98a0-0cb73da77a3f-kube-api-access-8s5h7\") pod \"metrics-server-654cbcb7cd-n2kbl\" (UID: \"685f22f3-dec5-476b-98a0-0cb73da77a3f\") " pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:38.946754 master-0 kubenswrapper[37036]: I0312 14:38:38.946700 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" Mar 12 14:38:39.339028 master-0 kubenswrapper[37036]: I0312 14:38:39.338976 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"] Mar 12 14:38:39.347059 master-0 kubenswrapper[37036]: W0312 14:38:39.347020 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685f22f3_dec5_476b_98a0_0cb73da77a3f.slice/crio-6c3209fe4b6442d61ce5dafc498bba0e4a14ee4b6508916661b071c18ece7a8a WatchSource:0}: Error finding container 6c3209fe4b6442d61ce5dafc498bba0e4a14ee4b6508916661b071c18ece7a8a: Status 404 returned error can't find the container with id 6c3209fe4b6442d61ce5dafc498bba0e4a14ee4b6508916661b071c18ece7a8a Mar 12 14:38:39.392382 master-0 kubenswrapper[37036]: I0312 14:38:39.392324 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" event={"ID":"685f22f3-dec5-476b-98a0-0cb73da77a3f","Type":"ContainerStarted","Data":"6c3209fe4b6442d61ce5dafc498bba0e4a14ee4b6508916661b071c18ece7a8a"} Mar 12 14:38:39.394795 master-0 kubenswrapper[37036]: I0312 14:38:39.394763 37036 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"dd4412335943fc32976549aed4eb02a61b9a01379178dc348f4f02f645714fb6"} Mar 12 14:38:39.394795 master-0 kubenswrapper[37036]: I0312 14:38:39.394786 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"b7491aca5d8cd6235d8584428ec1db51ec6d927436df11ef34e63fd70e5abc7e"} Mar 12 14:38:39.394795 master-0 kubenswrapper[37036]: I0312 14:38:39.394796 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"749453ab4e7d2ace9c0d56f120ec6cf59996714f4f1670369f19e50be6f766b3"} Mar 12 14:38:40.411411 master-0 kubenswrapper[37036]: I0312 14:38:40.411359 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" event={"ID":"685f22f3-dec5-476b-98a0-0cb73da77a3f","Type":"ContainerStarted","Data":"70c2df00b4e490d61f07deb99c611f5a59a0ce67947b36dbfba9372a91e48b23"} Mar 12 14:38:40.438774 master-0 kubenswrapper[37036]: I0312 14:38:40.438695 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl" podStartSLOduration=2.438679776 podStartE2EDuration="2.438679776s" podCreationTimestamp="2026-03-12 14:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:38:40.438673336 +0000 UTC m=+179.446414283" watchObservedRunningTime="2026-03-12 14:38:40.438679776 +0000 UTC m=+179.446420713" Mar 12 14:38:41.420450 master-0 kubenswrapper[37036]: I0312 14:38:41.420384 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"4d8d7c480434e57d9b96e9718fa9cbb7648ff913ba91af2af29cf895b01f0cc9"}
Mar 12 14:38:41.420450 master-0 kubenswrapper[37036]: I0312 14:38:41.420449 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"d31619ce039dacb4413cc3a03c07951df0073e48b51d63aec3f9e68459ba069c"}
Mar 12 14:38:41.420917 master-0 kubenswrapper[37036]: I0312 14:38:41.420467 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" event={"ID":"71a335bd-078a-4b8c-ae09-2e40765034d3","Type":"ContainerStarted","Data":"cfe6933df8b66e692cd1b5f5063aaa9b2b9283cf82836d48af403d68a7a1ccb2"}
Mar 12 14:38:41.449870 master-0 kubenswrapper[37036]: I0312 14:38:41.449715 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4" podStartSLOduration=2.237814402 podStartE2EDuration="6.449696397s" podCreationTimestamp="2026-03-12 14:38:35 +0000 UTC" firstStartedPulling="2026-03-12 14:38:36.697880229 +0000 UTC m=+175.705621166" lastFinishedPulling="2026-03-12 14:38:40.909762224 +0000 UTC m=+179.917503161" observedRunningTime="2026-03-12 14:38:41.448438757 +0000 UTC m=+180.456179694" watchObservedRunningTime="2026-03-12 14:38:41.449696397 +0000 UTC m=+180.457437344"
Mar 12 14:38:42.430767 master-0 kubenswrapper[37036]: I0312 14:38:42.430681 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4"
Mar 12 14:38:46.308080 master-0 kubenswrapper[37036]: I0312 14:38:46.307995 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-79b7956d9f-qkhd4"
Mar 12 14:38:52.425857 master-0 kubenswrapper[37036]: I0312 14:38:52.425745 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-d7bc769d-7n7p2" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console" containerID="cri-o://80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8" gracePeriod=15
Mar 12 14:38:52.865511 master-0 kubenswrapper[37036]: I0312 14:38:52.865463 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d7bc769d-7n7p2_d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687/console/0.log"
Mar 12 14:38:52.865652 master-0 kubenswrapper[37036]: I0312 14:38:52.865535 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:38:52.892543 master-0 kubenswrapper[37036]: I0312 14:38:52.892453 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892543 master-0 kubenswrapper[37036]: I0312 14:38:52.892539 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892811 master-0 kubenswrapper[37036]: I0312 14:38:52.892571 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mskzv\" (UniqueName: \"kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892811 master-0 kubenswrapper[37036]: I0312 14:38:52.892635 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892811 master-0 kubenswrapper[37036]: I0312 14:38:52.892679 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892811 master-0 kubenswrapper[37036]: I0312 14:38:52.892735 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.892811 master-0 kubenswrapper[37036]: I0312 14:38:52.892780 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config\") pod \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\" (UID: \"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687\") "
Mar 12 14:38:52.893417 master-0 kubenswrapper[37036]: I0312 14:38:52.893358 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:52.893492 master-0 kubenswrapper[37036]: I0312 14:38:52.893371 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config" (OuterVolumeSpecName: "console-config") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:52.893574 master-0 kubenswrapper[37036]: I0312 14:38:52.893509 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca" (OuterVolumeSpecName: "service-ca") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:52.894234 master-0 kubenswrapper[37036]: I0312 14:38:52.893832 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:38:52.895479 master-0 kubenswrapper[37036]: I0312 14:38:52.895445 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:38:52.896438 master-0 kubenswrapper[37036]: I0312 14:38:52.896379 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv" (OuterVolumeSpecName: "kube-api-access-mskzv") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "kube-api-access-mskzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:38:52.896655 master-0 kubenswrapper[37036]: I0312 14:38:52.896591 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" (UID: "d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.993985 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994023 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mskzv\" (UniqueName: \"kubernetes.io/projected/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-kube-api-access-mskzv\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994033 37036 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994042 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994052 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994060 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:52.994092 master-0 kubenswrapper[37036]: I0312 14:38:52.994069 37036 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:38:53.503847 master-0 kubenswrapper[37036]: I0312 14:38:53.503798 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d7bc769d-7n7p2_d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687/console/0.log"
Mar 12 14:38:53.504374 master-0 kubenswrapper[37036]: I0312 14:38:53.503853 37036 generic.go:334] "Generic (PLEG): container finished" podID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerID="80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8" exitCode=2
Mar 12 14:38:53.504374 master-0 kubenswrapper[37036]: I0312 14:38:53.503888 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7bc769d-7n7p2" event={"ID":"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687","Type":"ContainerDied","Data":"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"}
Mar 12 14:38:53.504374 master-0 kubenswrapper[37036]: I0312 14:38:53.503930 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7bc769d-7n7p2" event={"ID":"d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687","Type":"ContainerDied","Data":"9d6e4aa19467aa0fd054b900eafce5c903e382b7c83456640b5767327c3cbfb0"}
Mar 12 14:38:53.504374 master-0 kubenswrapper[37036]: I0312 14:38:53.503946 37036 scope.go:117] "RemoveContainer" containerID="80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"
Mar 12 14:38:53.504374 master-0 kubenswrapper[37036]: I0312 14:38:53.504029 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7bc769d-7n7p2"
Mar 12 14:38:53.520818 master-0 kubenswrapper[37036]: I0312 14:38:53.520768 37036 scope.go:117] "RemoveContainer" containerID="80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"
Mar 12 14:38:53.521291 master-0 kubenswrapper[37036]: E0312 14:38:53.521267 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8\": container with ID starting with 80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8 not found: ID does not exist" containerID="80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"
Mar 12 14:38:53.521367 master-0 kubenswrapper[37036]: I0312 14:38:53.521293 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8"} err="failed to get container status \"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8\": rpc error: code = NotFound desc = could not find container \"80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8\": container with ID starting with 80c9073214d7c9dfb60e279e6fa079010abac62c8357abe680584ea3eb7ecac8 not found: ID does not exist"
Mar 12 14:38:53.531442 master-0 kubenswrapper[37036]: I0312 14:38:53.531385 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"]
Mar 12 14:38:53.537951 master-0 kubenswrapper[37036]: I0312 14:38:53.537893 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-d7bc769d-7n7p2"]
Mar 12 14:38:55.244784 master-0 kubenswrapper[37036]: I0312 14:38:55.244711 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" path="/var/lib/kubelet/pods/d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687/volumes"
Mar 12 14:38:58.947185 master-0 kubenswrapper[37036]: I0312 14:38:58.947095 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"
Mar 12 14:38:58.947185 master-0 kubenswrapper[37036]: I0312 14:38:58.947177 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"
Mar 12 14:39:11.178853 master-0 kubenswrapper[37036]: I0312 14:39:11.178798 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 14:39:11.179496 master-0 kubenswrapper[37036]: E0312 14:39:11.179084 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console"
Mar 12 14:39:11.179496 master-0 kubenswrapper[37036]: I0312 14:39:11.179096 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console"
Mar 12 14:39:11.179496 master-0 kubenswrapper[37036]: I0312 14:39:11.179245 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192dc2b-d1d6-45fe-bdd1-9ceb6ec6e687" containerName="console"
Mar 12 14:39:11.181040 master-0 kubenswrapper[37036]: I0312 14:39:11.181013 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.182865 master-0 kubenswrapper[37036]: I0312 14:39:11.182823 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 12 14:39:11.183762 master-0 kubenswrapper[37036]: I0312 14:39:11.183716 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 12 14:39:11.184325 master-0 kubenswrapper[37036]: I0312 14:39:11.184289 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 12 14:39:11.185554 master-0 kubenswrapper[37036]: I0312 14:39:11.185522 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 12 14:39:11.185722 master-0 kubenswrapper[37036]: I0312 14:39:11.185674 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 12 14:39:11.185780 master-0 kubenswrapper[37036]: I0312 14:39:11.185680 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 12 14:39:11.186142 master-0 kubenswrapper[37036]: I0312 14:39:11.186117 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 12 14:39:11.192860 master-0 kubenswrapper[37036]: I0312 14:39:11.192822 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 12 14:39:11.207775 master-0 kubenswrapper[37036]: I0312 14:39:11.207715 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 14:39:11.344120 master-0 kubenswrapper[37036]: I0312 14:39:11.344037 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344120 master-0 kubenswrapper[37036]: I0312 14:39:11.344101 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344120 master-0 kubenswrapper[37036]: I0312 14:39:11.344130 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-web-config\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344457 master-0 kubenswrapper[37036]: I0312 14:39:11.344155 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344457 master-0 kubenswrapper[37036]: I0312 14:39:11.344287 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344457 master-0 kubenswrapper[37036]: I0312 14:39:11.344366 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344457 master-0 kubenswrapper[37036]: I0312 14:39:11.344441 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-config-volume\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344639 master-0 kubenswrapper[37036]: I0312 14:39:11.344476 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpx7l\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-kube-api-access-zpx7l\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344639 master-0 kubenswrapper[37036]: I0312 14:39:11.344519 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344639 master-0 kubenswrapper[37036]: I0312 14:39:11.344547 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344639 master-0 kubenswrapper[37036]: I0312 14:39:11.344575 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-config-out\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.344814 master-0 kubenswrapper[37036]: I0312 14:39:11.344740 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446286 master-0 kubenswrapper[37036]: I0312 14:39:11.446157 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446286 master-0 kubenswrapper[37036]: I0312 14:39:11.446232 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446286 master-0 kubenswrapper[37036]: I0312 14:39:11.446261 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-web-config\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446286 master-0 kubenswrapper[37036]: I0312 14:39:11.446280 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446603 master-0 kubenswrapper[37036]: I0312 14:39:11.446304 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446867 master-0 kubenswrapper[37036]: I0312 14:39:11.446458 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446945 master-0 kubenswrapper[37036]: I0312 14:39:11.446912 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-config-volume\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.446945 master-0 kubenswrapper[37036]: I0312 14:39:11.446940 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpx7l\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-kube-api-access-zpx7l\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447008 master-0 kubenswrapper[37036]: I0312 14:39:11.446963 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447008 master-0 kubenswrapper[37036]: I0312 14:39:11.446984 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447008 master-0 kubenswrapper[37036]: I0312 14:39:11.446999 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-config-out\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447104 master-0 kubenswrapper[37036]: I0312 14:39:11.447060 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447104 master-0 kubenswrapper[37036]: I0312 14:39:11.447083 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.447319 master-0 kubenswrapper[37036]: I0312 14:39:11.447288 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.449471 master-0 kubenswrapper[37036]: I0312 14:39:11.447825 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.451590 master-0 kubenswrapper[37036]: I0312 14:39:11.451347 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.452292 master-0 kubenswrapper[37036]: I0312 14:39:11.452265 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-web-config\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.452838 master-0 kubenswrapper[37036]: I0312 14:39:11.452819 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.453504 master-0 kubenswrapper[37036]: I0312 14:39:11.453465 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-config-volume\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.454942 master-0 kubenswrapper[37036]: I0312 14:39:11.454429 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.500152 master-0 kubenswrapper[37036]: I0312 14:39:11.500096 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.503928 master-0 kubenswrapper[37036]: I0312 14:39:11.501798 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b31526da-bd77-4d32-af73-1ceccaebdce7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.503928 master-0 kubenswrapper[37036]: I0312 14:39:11.501919 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b31526da-bd77-4d32-af73-1ceccaebdce7-config-out\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.539920 master-0 kubenswrapper[37036]: I0312 14:39:11.539430 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpx7l\" (UniqueName: \"kubernetes.io/projected/b31526da-bd77-4d32-af73-1ceccaebdce7-kube-api-access-zpx7l\") pod \"alertmanager-main-0\" (UID: \"b31526da-bd77-4d32-af73-1ceccaebdce7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:11.797792 master-0 kubenswrapper[37036]: I0312 14:39:11.797739 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 14:39:12.252008 master-0 kubenswrapper[37036]: I0312 14:39:12.251942 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 14:39:12.254680 master-0 kubenswrapper[37036]: W0312 14:39:12.254646 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb31526da_bd77_4d32_af73_1ceccaebdce7.slice/crio-7988920ac54e91333dc651d286bd1ac764ad4c192fc64c1cf83b151692389f40 WatchSource:0}: Error finding container 7988920ac54e91333dc651d286bd1ac764ad4c192fc64c1cf83b151692389f40: Status 404 returned error can't find the container with id 7988920ac54e91333dc651d286bd1ac764ad4c192fc64c1cf83b151692389f40
Mar 12 14:39:12.638695 master-0 kubenswrapper[37036]: I0312 14:39:12.638582 37036 generic.go:334] "Generic (PLEG): container finished" podID="b31526da-bd77-4d32-af73-1ceccaebdce7" containerID="b1bd258839859240967974b8f7648f015ad43addb9c20d6c0c85ab8b605a8086" exitCode=0
Mar 12 14:39:12.638695 master-0 kubenswrapper[37036]: I0312 14:39:12.638641 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerDied","Data":"b1bd258839859240967974b8f7648f015ad43addb9c20d6c0c85ab8b605a8086"}
Mar 12 14:39:12.639133 master-0 kubenswrapper[37036]: I0312 14:39:12.638707 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"7988920ac54e91333dc651d286bd1ac764ad4c192fc64c1cf83b151692389f40"}
Mar 12 14:39:15.661053 master-0 kubenswrapper[37036]: I0312 14:39:15.660985 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"b0e79078caf2544259af5eaf3a0300661393a07fc9250f5fbddf250d5f578418"}
Mar 12 14:39:15.661053 master-0 kubenswrapper[37036]: I0312 14:39:15.661049 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"e19753f19715299f3cc096657b398166a0aa3603d811cc63e7f706bd511a43e3"}
Mar 12 14:39:15.661053 master-0 kubenswrapper[37036]: I0312 14:39:15.661063 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"b5dd229056dcdde49e5abdb160c86154bd2ccf2b5785d6ccdb2d17ffe0d2fab0"}
Mar 12 14:39:15.661652 master-0 kubenswrapper[37036]: I0312 14:39:15.661074 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"d61e0163f73148dc92f858fca6c7186d1d83929ee78395dc45794d6537955e47"}
Mar 12 14:39:15.661652 master-0 kubenswrapper[37036]: I0312 14:39:15.661087 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"1a9e61152142526c8a346f92c735e325ea366f32ac5b25acec20bb57f751d6f3"}
Mar 12 14:39:15.661652 master-0 kubenswrapper[37036]: I0312 14:39:15.661098 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b31526da-bd77-4d32-af73-1ceccaebdce7","Type":"ContainerStarted","Data":"e036c505229382828ea4423266eccd78faf995dbd4c3c5c7a72ad840c40b48e7"}
Mar 12 14:39:15.761343 master-0 kubenswrapper[37036]: I0312 14:39:15.761251 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.64820159 podStartE2EDuration="4.761229158s" podCreationTimestamp="2026-03-12 14:39:11 +0000 UTC" firstStartedPulling="2026-03-12 14:39:12.640236654 +0000 UTC m=+211.647977591" lastFinishedPulling="2026-03-12 14:39:14.753264222 +0000 UTC m=+213.761005159" observedRunningTime="2026-03-12 14:39:15.759998827 +0000 UTC m=+214.767739764" watchObservedRunningTime="2026-03-12 14:39:15.761229158 +0000 UTC m=+214.768970095"
Mar 12 14:39:18.953715 master-0 kubenswrapper[37036]: I0312 14:39:18.953629 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"
Mar 12 14:39:18.960207 master-0 kubenswrapper[37036]: I0312 14:39:18.960104 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-654cbcb7cd-n2kbl"
Mar 12 14:39:41.387323 master-0 kubenswrapper[37036]: I0312 14:39:41.387261 37036 scope.go:117] "RemoveContainer" containerID="bd7899bffaf6aa78dc3ed5f5798ea564a1a0894027ca075b490729e999a8ce5b"
Mar 12 14:39:44.020035 master-0 kubenswrapper[37036]: I0312 14:39:44.019967 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 12 14:39:44.025885 master-0 kubenswrapper[37036]: I0312 14:39:44.023307 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 14:39:44.027318 master-0 kubenswrapper[37036]: I0312 14:39:44.026966 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-4qd00lteku58r"
Mar 12 14:39:44.027318 master-0 kubenswrapper[37036]: I0312 14:39:44.027213 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 12 14:39:44.027318 master-0 kubenswrapper[37036]: I0312 14:39:44.027264 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.027575 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.027701 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.029430 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.029580 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.029691 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 12 14:39:44.030921 master-0 kubenswrapper[37036]: I0312 14:39:44.029774 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 12 14:39:44.032154 master-0 kubenswrapper[37036]: I0312 14:39:44.032119 37036 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 14:39:44.033599 master-0 kubenswrapper[37036]: I0312 14:39:44.033475 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 14:39:44.040545 master-0 kubenswrapper[37036]: I0312 14:39:44.040484 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 14:39:44.046074 master-0 kubenswrapper[37036]: I0312 14:39:44.046015 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046322 master-0 kubenswrapper[37036]: I0312 14:39:44.046221 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046322 master-0 kubenswrapper[37036]: I0312 14:39:44.046291 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046437 master-0 kubenswrapper[37036]: I0312 14:39:44.046327 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046437 master-0 kubenswrapper[37036]: I0312 14:39:44.046378 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046544 master-0 kubenswrapper[37036]: I0312 14:39:44.046432 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046544 master-0 kubenswrapper[37036]: I0312 14:39:44.046493 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046544 master-0 kubenswrapper[37036]: I0312 14:39:44.046529 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046676 master-0 kubenswrapper[37036]: I0312 
14:39:44.046574 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046676 master-0 kubenswrapper[37036]: I0312 14:39:44.046660 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046768 master-0 kubenswrapper[37036]: I0312 14:39:44.046714 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046768 master-0 kubenswrapper[37036]: I0312 14:39:44.046748 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046854 master-0 kubenswrapper[37036]: I0312 14:39:44.046780 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.046854 master-0 kubenswrapper[37036]: I0312 14:39:44.046817 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.047055 master-0 kubenswrapper[37036]: I0312 14:39:44.047002 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.047122 master-0 kubenswrapper[37036]: I0312 14:39:44.047094 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.047196 master-0 kubenswrapper[37036]: I0312 14:39:44.047172 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726qv\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-kube-api-access-726qv\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.047266 master-0 kubenswrapper[37036]: I0312 14:39:44.047217 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.048931 master-0 kubenswrapper[37036]: I0312 14:39:44.048861 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 14:39:44.148816 master-0 kubenswrapper[37036]: I0312 14:39:44.148745 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.148816 master-0 kubenswrapper[37036]: I0312 14:39:44.148807 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148829 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148855 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148879 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148913 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148928 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.148948 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.149104 master-0 kubenswrapper[37036]: I0312 14:39:44.149005 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150016 master-0 kubenswrapper[37036]: I0312 14:39:44.149992 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150074 master-0 kubenswrapper[37036]: I0312 14:39:44.150046 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150074 master-0 kubenswrapper[37036]: I0312 14:39:44.150068 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150143 master-0 kubenswrapper[37036]: I0312 14:39:44.150084 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-web-config\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150143 master-0 kubenswrapper[37036]: I0312 14:39:44.150105 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150143 master-0 kubenswrapper[37036]: I0312 14:39:44.150141 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150255 master-0 kubenswrapper[37036]: I0312 14:39:44.150162 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150255 master-0 kubenswrapper[37036]: I0312 14:39:44.150185 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726qv\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-kube-api-access-726qv\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150255 master-0 kubenswrapper[37036]: I0312 14:39:44.150203 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150255 master-0 kubenswrapper[37036]: I0312 14:39:44.150221 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.150578 master-0 kubenswrapper[37036]: I0312 14:39:44.150560 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.151937 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.152489 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.153100 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.153184 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.153374 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.154026 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154186 master-0 kubenswrapper[37036]: I0312 14:39:44.154154 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.154462 master-0 kubenswrapper[37036]: I0312 14:39:44.154201 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.155947 master-0 kubenswrapper[37036]: I0312 
14:39:44.155914 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.158752 master-0 kubenswrapper[37036]: I0312 14:39:44.156227 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.158752 master-0 kubenswrapper[37036]: I0312 14:39:44.157249 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-config\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.158752 master-0 kubenswrapper[37036]: I0312 14:39:44.157475 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-web-config\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.159378 master-0 kubenswrapper[37036]: I0312 14:39:44.159343 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.161480 master-0 kubenswrapper[37036]: I0312 14:39:44.161437 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6876993e-91d1-4544-bd72-e2eb4f1e10d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.163385 master-0 kubenswrapper[37036]: I0312 14:39:44.163353 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6876993e-91d1-4544-bd72-e2eb4f1e10d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.172214 master-0 kubenswrapper[37036]: I0312 14:39:44.172164 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726qv\" (UniqueName: \"kubernetes.io/projected/6876993e-91d1-4544-bd72-e2eb4f1e10d1-kube-api-access-726qv\") pod \"prometheus-k8s-0\" (UID: \"6876993e-91d1-4544-bd72-e2eb4f1e10d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.351019 master-0 kubenswrapper[37036]: I0312 14:39:44.350855 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 14:39:44.780409 master-0 kubenswrapper[37036]: I0312 14:39:44.777056 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 14:39:44.882017 master-0 kubenswrapper[37036]: I0312 14:39:44.881960 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"3a550f2feb9168852d3165cc35b91d8d83e7aaac82428072397c1977d9289359"} Mar 12 14:39:45.891301 master-0 kubenswrapper[37036]: I0312 14:39:45.891217 37036 generic.go:334] "Generic (PLEG): container finished" podID="6876993e-91d1-4544-bd72-e2eb4f1e10d1" containerID="0eca9332f32f063ad5a0677200dad236b82abc58b64edfec91ac956ff484f685" exitCode=0 Mar 12 14:39:45.891301 master-0 kubenswrapper[37036]: I0312 14:39:45.891274 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerDied","Data":"0eca9332f32f063ad5a0677200dad236b82abc58b64edfec91ac956ff484f685"} Mar 12 14:39:49.924069 master-0 kubenswrapper[37036]: I0312 14:39:49.924014 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"bbe921f5675ad8912031f1097c11e28ffe01de159afca0ab87dad80665da7b2b"} Mar 12 14:39:49.924069 master-0 kubenswrapper[37036]: I0312 14:39:49.924067 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"49ad2a94ed6ba8a0bf4506315fcddcf857c0ac8964f2b029ad28ab71fafc0fa9"} Mar 12 14:39:49.924069 master-0 kubenswrapper[37036]: I0312 14:39:49.924080 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"ee9535a5b5dd7cb782fd7dc36457d08c49d63d033a92c5f7a773c9cdb9f82898"} Mar 12 14:39:50.939971 master-0 kubenswrapper[37036]: I0312 14:39:50.939523 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"8e9c230441aee74249e8b9e340587944d2eca2ab67923f84cead37945192c862"} Mar 12 14:39:50.939971 master-0 kubenswrapper[37036]: I0312 14:39:50.939593 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"5a68d1498b6267a8a607c6ddcb6f1e30773e5b4617e77a51fe160b0a0e13dfa4"} Mar 12 14:39:50.939971 master-0 kubenswrapper[37036]: I0312 14:39:50.939613 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6876993e-91d1-4544-bd72-e2eb4f1e10d1","Type":"ContainerStarted","Data":"757c69364a7c69cd9b1342087f47b8eb8acac38ffcec93a63c4b1c2bfa7714d3"} Mar 12 14:39:50.989459 master-0 kubenswrapper[37036]: I0312 14:39:50.987236 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.48494928 podStartE2EDuration="7.987214295s" podCreationTimestamp="2026-03-12 14:39:43 +0000 UTC" firstStartedPulling="2026-03-12 14:39:45.89264258 +0000 UTC m=+244.900383517" lastFinishedPulling="2026-03-12 14:39:49.394907595 +0000 UTC m=+248.402648532" observedRunningTime="2026-03-12 14:39:50.984970559 +0000 UTC m=+249.992711526" watchObservedRunningTime="2026-03-12 14:39:50.987214295 +0000 UTC m=+249.994955232" Mar 12 14:39:54.351882 master-0 kubenswrapper[37036]: I0312 14:39:54.351786 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 14:40:41.216884 master-0 kubenswrapper[37036]: I0312 14:40:41.216832 37036 kubelet.go:1505] "Image garbage collection succeeded"
Mar 12 14:40:44.352116 master-0 kubenswrapper[37036]: I0312 14:40:44.352061 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 14:40:44.382423 master-0 kubenswrapper[37036]: I0312 14:40:44.382361 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 14:40:45.384985 master-0 kubenswrapper[37036]: I0312 14:40:45.384935 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 14:41:09.510485 master-0 kubenswrapper[37036]: I0312 14:41:09.510426 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:41:09.569984 master-0 kubenswrapper[37036]: I0312 14:41:09.568827 37036 generic.go:334] "Generic (PLEG): container finished" podID="addf66af-4d97-4c1e-960d-ace98c27961b" containerID="a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757" exitCode=0
Mar 12 14:41:09.569984 master-0 kubenswrapper[37036]: I0312 14:41:09.568875 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq"
Mar 12 14:41:09.569984 master-0 kubenswrapper[37036]: I0312 14:41:09.568877 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerDied","Data":"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"}
Mar 12 14:41:09.569984 master-0 kubenswrapper[37036]: I0312 14:41:09.569379 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85b44c7984-pzbfq" event={"ID":"addf66af-4d97-4c1e-960d-ace98c27961b","Type":"ContainerDied","Data":"b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4"}
Mar 12 14:41:09.569984 master-0 kubenswrapper[37036]: I0312 14:41:09.569399 37036 scope.go:117] "RemoveContainer" containerID="a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"
Mar 12 14:41:09.585006 master-0 kubenswrapper[37036]: I0312 14:41:09.584918 37036 scope.go:117] "RemoveContainer" containerID="a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"
Mar 12 14:41:09.585458 master-0 kubenswrapper[37036]: E0312 14:41:09.585428 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757\": container with ID starting with a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757 not found: ID does not exist" containerID="a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"
Mar 12 14:41:09.585500 master-0 kubenswrapper[37036]: I0312 14:41:09.585461 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757"} err="failed to get container status \"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757\": rpc error: code = NotFound desc = could not find container \"a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757\": container with ID starting with a005763593d95225e12c3935e14975d230d9626dd66e3c9c188263f4188a5757 not found: ID does not exist"
Mar 12 14:41:09.605037 master-0 kubenswrapper[37036]: I0312 14:41:09.604942 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605304 master-0 kubenswrapper[37036]: I0312 14:41:09.605053 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605304 master-0 kubenswrapper[37036]: I0312 14:41:09.605100 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605304 master-0 kubenswrapper[37036]: I0312 14:41:09.605169 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605304 master-0 kubenswrapper[37036]: I0312 14:41:09.605255 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605304 master-0 kubenswrapper[37036]: I0312 14:41:09.605300 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605563 master-0 kubenswrapper[37036]: I0312 14:41:09.605450 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") pod \"addf66af-4d97-4c1e-960d-ace98c27961b\" (UID: \"addf66af-4d97-4c1e-960d-ace98c27961b\") "
Mar 12 14:41:09.605723 master-0 kubenswrapper[37036]: I0312 14:41:09.605662 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log" (OuterVolumeSpecName: "audit-log") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:41:09.605780 master-0 kubenswrapper[37036]: I0312 14:41:09.605713 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:09.606403 master-0 kubenswrapper[37036]: I0312 14:41:09.606300 37036 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.606403 master-0 kubenswrapper[37036]: I0312 14:41:09.606338 37036 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/addf66af-4d97-4c1e-960d-ace98c27961b-audit-log\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.606886 master-0 kubenswrapper[37036]: I0312 14:41:09.606830 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:09.623146 master-0 kubenswrapper[37036]: I0312 14:41:09.623055 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:41:09.623146 master-0 kubenswrapper[37036]: I0312 14:41:09.623080 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w" (OuterVolumeSpecName: "kube-api-access-l6d7w") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "kube-api-access-l6d7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:41:09.623392 master-0 kubenswrapper[37036]: I0312 14:41:09.623217 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:41:09.623600 master-0 kubenswrapper[37036]: I0312 14:41:09.623541 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "addf66af-4d97-4c1e-960d-ace98c27961b" (UID: "addf66af-4d97-4c1e-960d-ace98c27961b"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:41:09.707783 master-0 kubenswrapper[37036]: I0312 14:41:09.707713 37036 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/addf66af-4d97-4c1e-960d-ace98c27961b-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.707783 master-0 kubenswrapper[37036]: I0312 14:41:09.707771 37036 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-client-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.708077 master-0 kubenswrapper[37036]: I0312 14:41:09.707793 37036 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.708077 master-0 kubenswrapper[37036]: I0312 14:41:09.707817 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6d7w\" (UniqueName: \"kubernetes.io/projected/addf66af-4d97-4c1e-960d-ace98c27961b-kube-api-access-l6d7w\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.708077 master-0 kubenswrapper[37036]: I0312 14:41:09.707837 37036 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/addf66af-4d97-4c1e-960d-ace98c27961b-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:09.912698 master-0 kubenswrapper[37036]: I0312 14:41:09.912631 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"]
Mar 12 14:41:09.920785 master-0 kubenswrapper[37036]: I0312 14:41:09.920731 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-85b44c7984-pzbfq"]
Mar 12 14:41:10.008970 master-0 kubenswrapper[37036]: E0312 14:41:10.008908 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddf66af_4d97_4c1e_960d_ace98c27961b.slice/crio-b4e230d3f789f82e2598481603b93fd52d829378a89dce8399b53642cd4db5c4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddf66af_4d97_4c1e_960d_ace98c27961b.slice\": RecentStats: unable to find data in memory cache]"
Mar 12 14:41:11.264404 master-0 kubenswrapper[37036]: I0312 14:41:11.262728 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" path="/var/lib/kubelet/pods/addf66af-4d97-4c1e-960d-ace98c27961b/volumes"
Mar 12 14:41:23.256428 master-0 kubenswrapper[37036]: I0312 14:41:23.256369 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"]
Mar 12 14:41:23.257053 master-0 kubenswrapper[37036]: E0312 14:41:23.256664 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" containerName="metrics-server"
Mar 12 14:41:23.257053 master-0 kubenswrapper[37036]: I0312 14:41:23.256679 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" containerName="metrics-server"
Mar 12 14:41:23.257053 master-0 kubenswrapper[37036]: I0312 14:41:23.256861 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="addf66af-4d97-4c1e-960d-ace98c27961b" containerName="metrics-server"
Mar 12 14:41:23.257405 master-0 kubenswrapper[37036]: I0312 14:41:23.257379 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.275687 master-0 kubenswrapper[37036]: I0312 14:41:23.275636 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"]
Mar 12 14:41:23.315412 master-0 kubenswrapper[37036]: I0312 14:41:23.315365 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.316198 master-0 kubenswrapper[37036]: I0312 14:41:23.316180 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.316616 master-0 kubenswrapper[37036]: I0312 14:41:23.316601 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9f6v\" (UniqueName: \"kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.316723 master-0 kubenswrapper[37036]: I0312 14:41:23.316709 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.316995 master-0 kubenswrapper[37036]: I0312 14:41:23.316980 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.317115 master-0 kubenswrapper[37036]: I0312 14:41:23.317099 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.317325 master-0 kubenswrapper[37036]: I0312 14:41:23.317276 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419053 master-0 kubenswrapper[37036]: I0312 14:41:23.418998 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419368 master-0 kubenswrapper[37036]: I0312 14:41:23.419345 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9f6v\" (UniqueName: \"kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419509 master-0 kubenswrapper[37036]: I0312 14:41:23.419492 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419646 master-0 kubenswrapper[37036]: I0312 14:41:23.419629 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419766 master-0 kubenswrapper[37036]: I0312 14:41:23.419747 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.419885 master-0 kubenswrapper[37036]: I0312 14:41:23.419867 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.420106 master-0 kubenswrapper[37036]: I0312 14:41:23.420086 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.420245 master-0 kubenswrapper[37036]: I0312 14:41:23.420086 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.420557 master-0 kubenswrapper[37036]: I0312 14:41:23.420505 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.420631 master-0 kubenswrapper[37036]: I0312 14:41:23.420607 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.420868 master-0 kubenswrapper[37036]: I0312 14:41:23.420828 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.423001 master-0 kubenswrapper[37036]: I0312 14:41:23.422923 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.423103 master-0 kubenswrapper[37036]: I0312 14:41:23.423049 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.439774 master-0 kubenswrapper[37036]: I0312 14:41:23.439721 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9f6v\" (UniqueName: \"kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v\") pod \"console-64d4ccfbcf-82w2p\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.578502 master-0 kubenswrapper[37036]: I0312 14:41:23.578303 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:23.998045 master-0 kubenswrapper[37036]: I0312 14:41:23.997950 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"]
Mar 12 14:41:24.694808 master-0 kubenswrapper[37036]: I0312 14:41:24.694749 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d4ccfbcf-82w2p" event={"ID":"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf","Type":"ContainerStarted","Data":"1c1bfefbf00413f0e63810fcf625b0dbe7a64e5b0ba154c6ffd67314866e31d5"}
Mar 12 14:41:24.694808 master-0 kubenswrapper[37036]: I0312 14:41:24.694800 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d4ccfbcf-82w2p" event={"ID":"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf","Type":"ContainerStarted","Data":"f7335b93c32611dc18f2cd874ad5b43ab8e42e31f93e4d03d6fb8b78a630a081"}
Mar 12 14:41:24.718369 master-0 kubenswrapper[37036]: I0312 14:41:24.718279 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d4ccfbcf-82w2p" podStartSLOduration=1.7182362150000001 podStartE2EDuration="1.718236215s" podCreationTimestamp="2026-03-12 14:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:41:24.713079817 +0000 UTC m=+343.720820774" watchObservedRunningTime="2026-03-12 14:41:24.718236215 +0000 UTC m=+343.725977162"
Mar 12 14:41:33.578649 master-0 kubenswrapper[37036]: I0312 14:41:33.578537 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:33.578649 master-0 kubenswrapper[37036]: I0312 14:41:33.578656 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:33.585386 master-0 kubenswrapper[37036]: I0312 14:41:33.585327 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:33.769569 master-0 kubenswrapper[37036]: I0312 14:41:33.769497 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d4ccfbcf-82w2p"
Mar 12 14:41:33.854647 master-0 kubenswrapper[37036]: I0312 14:41:33.853858 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"]
Mar 12 14:41:46.734333 master-0 kubenswrapper[37036]: I0312 14:41:46.734246 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"]
Mar 12 14:41:46.736116 master-0 kubenswrapper[37036]: I0312 14:41:46.736090 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.739023 master-0 kubenswrapper[37036]: I0312 14:41:46.738812 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt"
Mar 12 14:41:46.739372 master-0 kubenswrapper[37036]: I0312 14:41:46.739295 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt"
Mar 12 14:41:46.739466 master-0 kubenswrapper[37036]: I0312 14:41:46.739408 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Mar 12 14:41:46.740528 master-0 kubenswrapper[37036]: I0312 14:41:46.740486 37036 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config"
Mar 12 14:41:46.771076 master-0 kubenswrapper[37036]: I0312 14:41:46.770972 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"]
Mar 12 14:41:46.810191 master-0 kubenswrapper[37036]: I0312 14:41:46.810127 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.810191 master-0 kubenswrapper[37036]: I0312 14:41:46.810186 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjlf\" (UniqueName: \"kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.810497 master-0 kubenswrapper[37036]: I0312 14:41:46.810254 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.911749 master-0 kubenswrapper[37036]: I0312 14:41:46.911684 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.912306 master-0 kubenswrapper[37036]: I0312 14:41:46.912282 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.912408 master-0 kubenswrapper[37036]: I0312 14:41:46.912394 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjlf\" (UniqueName: \"kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.913321 master-0 kubenswrapper[37036]: I0312 14:41:46.913280 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.916743 master-0 kubenswrapper[37036]: I0312 14:41:46.916704 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:46.929661 master-0 kubenswrapper[37036]: I0312 14:41:46.929624 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjlf\" (UniqueName: \"kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf\") pod \"sushy-emulator-78f6d7d749-rfr25\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:47.078714 master-0 kubenswrapper[37036]: I0312 14:41:47.078641 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:47.544991 master-0 kubenswrapper[37036]: I0312 14:41:47.543938 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"]
Mar 12 14:41:47.555063 master-0 kubenswrapper[37036]: W0312 14:41:47.554931 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb77289fb_9e3c_448c_a62c_9cba16fb43b8.slice/crio-e31a799e13a52c9c8c63a4be41fd1241745cf44b5c0af7308e0ca38cd28ddc01 WatchSource:0}: Error finding container e31a799e13a52c9c8c63a4be41fd1241745cf44b5c0af7308e0ca38cd28ddc01: Status 404 returned error can't find the container with id e31a799e13a52c9c8c63a4be41fd1241745cf44b5c0af7308e0ca38cd28ddc01
Mar 12 14:41:47.557750 master-0 kubenswrapper[37036]: I0312 14:41:47.557705 37036 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 14:41:47.929409 master-0 kubenswrapper[37036]: I0312 14:41:47.929272 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" event={"ID":"b77289fb-9e3c-448c-a62c-9cba16fb43b8","Type":"ContainerStarted","Data":"e31a799e13a52c9c8c63a4be41fd1241745cf44b5c0af7308e0ca38cd28ddc01"}
Mar 12 14:41:54.982462 master-0 kubenswrapper[37036]: I0312 14:41:54.982333 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" event={"ID":"b77289fb-9e3c-448c-a62c-9cba16fb43b8","Type":"ContainerStarted","Data":"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641"}
Mar 12 14:41:57.079962 master-0 kubenswrapper[37036]: I0312 14:41:57.079836 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:57.080562 master-0 kubenswrapper[37036]: I0312 14:41:57.080501 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:57.096236 master-0 kubenswrapper[37036]: I0312 14:41:57.096162 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:57.120582 master-0 kubenswrapper[37036]: I0312 14:41:57.120454 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" podStartSLOduration=4.629810879 podStartE2EDuration="11.120419068s" podCreationTimestamp="2026-03-12 14:41:46 +0000 UTC" firstStartedPulling="2026-03-12 14:41:47.557661342 +0000 UTC m=+366.565402319" lastFinishedPulling="2026-03-12 14:41:54.048269571 +0000 UTC m=+373.056010508" observedRunningTime="2026-03-12 14:41:55.002874694 +0000 UTC m=+374.010615631" watchObservedRunningTime="2026-03-12 14:41:57.120419068 +0000 UTC m=+376.128160065"
Mar 12 14:41:58.010760 master-0 kubenswrapper[37036]: I0312 14:41:58.010703 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25"
Mar 12 14:41:58.911520 master-0 kubenswrapper[37036]: I0312 14:41:58.911393 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-59db58b99d-jwn7z" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" containerID="cri-o://39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8" gracePeriod=15
Mar 12 14:41:59.374954 master-0 kubenswrapper[37036]: I0312 14:41:59.374792 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59db58b99d-jwn7z_5125edfe-0ec5-4664-ae68-2c98e3187d79/console/0.log"
Mar 12 14:41:59.374954 master-0 kubenswrapper[37036]: I0312 14:41:59.374882 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59db58b99d-jwn7z"
Mar 12 14:41:59.451587 master-0 kubenswrapper[37036]: I0312 14:41:59.451511 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.451834 master-0 kubenswrapper[37036]: I0312 14:41:59.451704 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.451834 master-0 kubenswrapper[37036]: I0312 14:41:59.451753 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4snz\" (UniqueName: \"kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.451834 master-0 kubenswrapper[37036]: I0312 14:41:59.451811 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.452012 master-0 kubenswrapper[37036]: I0312 14:41:59.451858 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.452012 master-0 kubenswrapper[37036]: I0312 14:41:59.451934 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.452012 master-0 kubenswrapper[37036]: I0312 14:41:59.451975 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config\") pod \"5125edfe-0ec5-4664-ae68-2c98e3187d79\" (UID: \"5125edfe-0ec5-4664-ae68-2c98e3187d79\") "
Mar 12 14:41:59.452139 master-0 kubenswrapper[37036]: I0312 14:41:59.452081 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config" (OuterVolumeSpecName: "console-config") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:59.452264 master-0 kubenswrapper[37036]: I0312 14:41:59.452224 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca" (OuterVolumeSpecName: "service-ca") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:59.452415 master-0 kubenswrapper[37036]: I0312 14:41:59.452380 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:59.452415 master-0 kubenswrapper[37036]: I0312 14:41:59.452409 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:41:59.452513 master-0 kubenswrapper[37036]: I0312 14:41:59.452367 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:59.452876 master-0 kubenswrapper[37036]: I0312 14:41:59.452721 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:41:59.455170 master-0 kubenswrapper[37036]: I0312 14:41:59.455120 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz" (OuterVolumeSpecName: "kube-api-access-p4snz") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "kube-api-access-p4snz".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:41:59.455680 master-0 kubenswrapper[37036]: I0312 14:41:59.455645 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:41:59.464117 master-0 kubenswrapper[37036]: I0312 14:41:59.464066 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5125edfe-0ec5-4664-ae68-2c98e3187d79" (UID: "5125edfe-0ec5-4664-ae68-2c98e3187d79"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:41:59.554331 master-0 kubenswrapper[37036]: I0312 14:41:59.554244 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4snz\" (UniqueName: \"kubernetes.io/projected/5125edfe-0ec5-4664-ae68-2c98e3187d79-kube-api-access-p4snz\") on node \"master-0\" DevicePath \"\"" Mar 12 14:41:59.554331 master-0 kubenswrapper[37036]: I0312 14:41:59.554300 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:41:59.554331 master-0 kubenswrapper[37036]: I0312 14:41:59.554311 37036 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5125edfe-0ec5-4664-ae68-2c98e3187d79-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:41:59.554331 master-0 kubenswrapper[37036]: I0312 14:41:59.554319 37036 reconciler_common.go:293] "Volume 
detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:41:59.554331 master-0 kubenswrapper[37036]: I0312 14:41:59.554330 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5125edfe-0ec5-4664-ae68-2c98e3187d79-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:00.023289 master-0 kubenswrapper[37036]: I0312 14:42:00.023218 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59db58b99d-jwn7z_5125edfe-0ec5-4664-ae68-2c98e3187d79/console/0.log" Mar 12 14:42:00.023289 master-0 kubenswrapper[37036]: I0312 14:42:00.023292 37036 generic.go:334] "Generic (PLEG): container finished" podID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerID="39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8" exitCode=2 Mar 12 14:42:00.023954 master-0 kubenswrapper[37036]: I0312 14:42:00.023390 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59db58b99d-jwn7z" event={"ID":"5125edfe-0ec5-4664-ae68-2c98e3187d79","Type":"ContainerDied","Data":"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8"} Mar 12 14:42:00.023954 master-0 kubenswrapper[37036]: I0312 14:42:00.023487 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59db58b99d-jwn7z" event={"ID":"5125edfe-0ec5-4664-ae68-2c98e3187d79","Type":"ContainerDied","Data":"578e2cad66d2e5b22a6881e70299814b8c5512b30af15fd467fc0d05faf8165e"} Mar 12 14:42:00.023954 master-0 kubenswrapper[37036]: I0312 14:42:00.023521 37036 scope.go:117] "RemoveContainer" containerID="39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8" Mar 12 14:42:00.023954 master-0 kubenswrapper[37036]: I0312 14:42:00.023418 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59db58b99d-jwn7z" Mar 12 14:42:00.045097 master-0 kubenswrapper[37036]: I0312 14:42:00.045051 37036 scope.go:117] "RemoveContainer" containerID="39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8" Mar 12 14:42:00.045576 master-0 kubenswrapper[37036]: E0312 14:42:00.045532 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8\": container with ID starting with 39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8 not found: ID does not exist" containerID="39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8" Mar 12 14:42:00.045618 master-0 kubenswrapper[37036]: I0312 14:42:00.045583 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8"} err="failed to get container status \"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8\": rpc error: code = NotFound desc = could not find container \"39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8\": container with ID starting with 39e0a785ec6f848229b39d7d3d01faa94660e8c2e17cdb3be5b43efffe0573b8 not found: ID does not exist" Mar 12 14:42:00.087204 master-0 kubenswrapper[37036]: I0312 14:42:00.087116 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"] Mar 12 14:42:00.092553 master-0 kubenswrapper[37036]: I0312 14:42:00.092493 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-59db58b99d-jwn7z"] Mar 12 14:42:01.244494 master-0 kubenswrapper[37036]: I0312 14:42:01.244428 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" path="/var/lib/kubelet/pods/5125edfe-0ec5-4664-ae68-2c98e3187d79/volumes" Mar 12 
14:42:17.554696 master-0 kubenswrapper[37036]: I0312 14:42:17.554621 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-747cb76f9c-59pt7"] Mar 12 14:42:17.555469 master-0 kubenswrapper[37036]: E0312 14:42:17.555004 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" Mar 12 14:42:17.555469 master-0 kubenswrapper[37036]: I0312 14:42:17.555024 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" Mar 12 14:42:17.555469 master-0 kubenswrapper[37036]: I0312 14:42:17.555222 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5125edfe-0ec5-4664-ae68-2c98e3187d79" containerName="console" Mar 12 14:42:17.556108 master-0 kubenswrapper[37036]: I0312 14:42:17.556073 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.569051 master-0 kubenswrapper[37036]: I0312 14:42:17.568993 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-747cb76f9c-59pt7"] Mar 12 14:42:17.637942 master-0 kubenswrapper[37036]: I0312 14:42:17.637835 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-os-client-config\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: \"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.638337 master-0 kubenswrapper[37036]: I0312 14:42:17.638311 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnckr\" (UniqueName: \"kubernetes.io/projected/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-kube-api-access-hnckr\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: 
\"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.740158 master-0 kubenswrapper[37036]: I0312 14:42:17.740105 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-os-client-config\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: \"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.740485 master-0 kubenswrapper[37036]: I0312 14:42:17.740465 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnckr\" (UniqueName: \"kubernetes.io/projected/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-kube-api-access-hnckr\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: \"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.744323 master-0 kubenswrapper[37036]: I0312 14:42:17.744281 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-os-client-config\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: \"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.760444 master-0 kubenswrapper[37036]: I0312 14:42:17.760386 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnckr\" (UniqueName: \"kubernetes.io/projected/7b357fa7-3920-46e0-9a2f-b4c46c4affb3-kube-api-access-hnckr\") pod \"nova-console-poller-747cb76f9c-59pt7\" (UID: \"7b357fa7-3920-46e0-9a2f-b4c46c4affb3\") " pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:17.891477 master-0 kubenswrapper[37036]: I0312 14:42:17.891324 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" Mar 12 14:42:18.328653 master-0 kubenswrapper[37036]: I0312 14:42:18.328599 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-747cb76f9c-59pt7"] Mar 12 14:42:18.339280 master-0 kubenswrapper[37036]: W0312 14:42:18.339196 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b357fa7_3920_46e0_9a2f_b4c46c4affb3.slice/crio-77f369143ac7ab12862cbaee2f8d938d86f2c6fe0b2533a4429a7e1e8c27c2cf WatchSource:0}: Error finding container 77f369143ac7ab12862cbaee2f8d938d86f2c6fe0b2533a4429a7e1e8c27c2cf: Status 404 returned error can't find the container with id 77f369143ac7ab12862cbaee2f8d938d86f2c6fe0b2533a4429a7e1e8c27c2cf Mar 12 14:42:19.086749 master-0 kubenswrapper[37036]: I0312 14:42:19.086648 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 14:42:19.088307 master-0 kubenswrapper[37036]: I0312 14:42:19.088269 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.090501 master-0 kubenswrapper[37036]: I0312 14:42:19.090304 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 14:42:19.113671 master-0 kubenswrapper[37036]: I0312 14:42:19.091486 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-wm5wg" Mar 12 14:42:19.113671 master-0 kubenswrapper[37036]: I0312 14:42:19.092740 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 14:42:19.164777 master-0 kubenswrapper[37036]: I0312 14:42:19.164697 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.165115 master-0 kubenswrapper[37036]: I0312 14:42:19.165063 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.165181 master-0 kubenswrapper[37036]: I0312 14:42:19.165163 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.205495 master-0 
kubenswrapper[37036]: I0312 14:42:19.205385 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" event={"ID":"7b357fa7-3920-46e0-9a2f-b4c46c4affb3","Type":"ContainerStarted","Data":"77f369143ac7ab12862cbaee2f8d938d86f2c6fe0b2533a4429a7e1e8c27c2cf"} Mar 12 14:42:19.266787 master-0 kubenswrapper[37036]: I0312 14:42:19.266693 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.267047 master-0 kubenswrapper[37036]: I0312 14:42:19.266794 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.267047 master-0 kubenswrapper[37036]: I0312 14:42:19.266838 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.267199 master-0 kubenswrapper[37036]: I0312 14:42:19.267113 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.267292 master-0 kubenswrapper[37036]: I0312 14:42:19.267138 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.295024 master-0 kubenswrapper[37036]: I0312 14:42:19.285708 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.430759 master-0 kubenswrapper[37036]: I0312 14:42:19.430555 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:19.903045 master-0 kubenswrapper[37036]: I0312 14:42:19.903009 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 14:42:19.903668 master-0 kubenswrapper[37036]: W0312 14:42:19.903624 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod436e173b_7237_4811_84f9_5f5f56d5625d.slice/crio-143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3 WatchSource:0}: Error finding container 143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3: Status 404 returned error can't find the container with id 143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3 Mar 12 14:42:20.213132 master-0 kubenswrapper[37036]: I0312 14:42:20.212995 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"436e173b-7237-4811-84f9-5f5f56d5625d","Type":"ContainerStarted","Data":"143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3"} Mar 12 
14:42:21.222438 master-0 kubenswrapper[37036]: I0312 14:42:21.222362 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"436e173b-7237-4811-84f9-5f5f56d5625d","Type":"ContainerStarted","Data":"9a41494081ce26755ac48819168149fc3a5dfcbbbd7c2b375b8b8ee57f106a3c"} Mar 12 14:42:21.291374 master-0 kubenswrapper[37036]: I0312 14:42:21.291258 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.291232912 podStartE2EDuration="2.291232912s" podCreationTimestamp="2026-03-12 14:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:42:21.285738185 +0000 UTC m=+400.293479122" watchObservedRunningTime="2026-03-12 14:42:21.291232912 +0000 UTC m=+400.298973859" Mar 12 14:42:23.583969 master-0 kubenswrapper[37036]: I0312 14:42:23.583693 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"] Mar 12 14:42:23.585061 master-0 kubenswrapper[37036]: I0312 14:42:23.585032 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.643624 master-0 kubenswrapper[37036]: I0312 14:42:23.643563 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"] Mar 12 14:42:23.745038 master-0 kubenswrapper[37036]: I0312 14:42:23.744962 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp49n\" (UniqueName: \"kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745263 master-0 kubenswrapper[37036]: I0312 14:42:23.745069 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745263 master-0 kubenswrapper[37036]: I0312 14:42:23.745102 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745263 master-0 kubenswrapper[37036]: I0312 14:42:23.745221 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745371 master-0 
kubenswrapper[37036]: I0312 14:42:23.745280 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745447 master-0 kubenswrapper[37036]: I0312 14:42:23.745418 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.745485 master-0 kubenswrapper[37036]: I0312 14:42:23.745452 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847514 master-0 kubenswrapper[37036]: I0312 14:42:23.847362 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847514 master-0 kubenswrapper[37036]: I0312 14:42:23.847440 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: 
\"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847514 master-0 kubenswrapper[37036]: I0312 14:42:23.847522 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847803 master-0 kubenswrapper[37036]: I0312 14:42:23.847550 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847803 master-0 kubenswrapper[37036]: I0312 14:42:23.847604 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp49n\" (UniqueName: \"kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847803 master-0 kubenswrapper[37036]: I0312 14:42:23.847656 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.847803 master-0 kubenswrapper[37036]: I0312 14:42:23.847694 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.849152 master-0 kubenswrapper[37036]: I0312 14:42:23.849116 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.849430 master-0 kubenswrapper[37036]: I0312 14:42:23.849387 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.849915 master-0 kubenswrapper[37036]: I0312 14:42:23.849864 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.850222 master-0 kubenswrapper[37036]: I0312 14:42:23.850194 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.852228 master-0 kubenswrapper[37036]: I0312 14:42:23.852196 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.853711 master-0 kubenswrapper[37036]: I0312 14:42:23.853657 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.865585 master-0 kubenswrapper[37036]: I0312 14:42:23.865539 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp49n\" (UniqueName: \"kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n\") pod \"console-6b9b4765bb-vsz5x\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") " pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:23.961679 master-0 kubenswrapper[37036]: I0312 14:42:23.961606 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:24.257383 master-0 kubenswrapper[37036]: I0312 14:42:24.257336 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" event={"ID":"7b357fa7-3920-46e0-9a2f-b4c46c4affb3","Type":"ContainerStarted","Data":"d98221c4638375f513f35cadcecdd5b68a9e08bf4ddce8eb0c09dddb00cf0c0b"} Mar 12 14:42:24.359843 master-0 kubenswrapper[37036]: I0312 14:42:24.359780 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"] Mar 12 14:42:25.266667 master-0 kubenswrapper[37036]: I0312 14:42:25.266602 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9b4765bb-vsz5x" event={"ID":"321f1912-4218-4afd-add3-ce16ef44420f","Type":"ContainerStarted","Data":"5e381331e26e635662a3062f1769b24d075c4f94bd20233b38d347328db44ac3"} Mar 12 14:42:25.268819 master-0 kubenswrapper[37036]: I0312 14:42:25.268264 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9b4765bb-vsz5x" event={"ID":"321f1912-4218-4afd-add3-ce16ef44420f","Type":"ContainerStarted","Data":"a2421fa142535539e6f10fc6ae916d7380a4af54642a7b09be8ea550b442eecf"} Mar 12 14:42:25.268923 master-0 kubenswrapper[37036]: I0312 14:42:25.268822 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" event={"ID":"7b357fa7-3920-46e0-9a2f-b4c46c4affb3","Type":"ContainerStarted","Data":"d76e99cfe1b0e7e244046bc9d0202a0fa4a2dbee79a22ede18e72e8d089215f3"} Mar 12 14:42:25.289527 master-0 kubenswrapper[37036]: I0312 14:42:25.289425 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b9b4765bb-vsz5x" podStartSLOduration=2.28939962 podStartE2EDuration="2.28939962s" podCreationTimestamp="2026-03-12 14:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:42:25.286675342 +0000 UTC m=+404.294416309" watchObservedRunningTime="2026-03-12 14:42:25.28939962 +0000 UTC m=+404.297140567" Mar 12 14:42:25.310764 master-0 kubenswrapper[37036]: I0312 14:42:25.310570 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-747cb76f9c-59pt7" podStartSLOduration=2.437337544 podStartE2EDuration="8.310552407s" podCreationTimestamp="2026-03-12 14:42:17 +0000 UTC" firstStartedPulling="2026-03-12 14:42:18.342119399 +0000 UTC m=+397.349860336" lastFinishedPulling="2026-03-12 14:42:24.215334262 +0000 UTC m=+403.223075199" observedRunningTime="2026-03-12 14:42:25.30626269 +0000 UTC m=+404.314003647" watchObservedRunningTime="2026-03-12 14:42:25.310552407 +0000 UTC m=+404.318293344" Mar 12 14:42:33.962440 master-0 kubenswrapper[37036]: I0312 14:42:33.962322 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:33.963646 master-0 kubenswrapper[37036]: I0312 14:42:33.963126 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:33.971328 master-0 kubenswrapper[37036]: I0312 14:42:33.971258 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:34.591662 master-0 kubenswrapper[37036]: I0312 14:42:34.591604 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b9b4765bb-vsz5x" Mar 12 14:42:34.722927 master-0 kubenswrapper[37036]: I0312 14:42:34.721269 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"] Mar 12 14:42:50.413795 master-0 kubenswrapper[37036]: I0312 14:42:50.413725 37036 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7"] Mar 12 14:42:50.415229 master-0 kubenswrapper[37036]: I0312 14:42:50.415196 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.427555 master-0 kubenswrapper[37036]: I0312 14:42:50.427507 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7"] Mar 12 14:42:50.607727 master-0 kubenswrapper[37036]: I0312 14:42:50.607647 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/976fffb6-62f4-455d-b934-8cb1a5b175f9-nova-console-recordings-pv\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.607727 master-0 kubenswrapper[37036]: I0312 14:42:50.607718 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n724j\" (UniqueName: \"kubernetes.io/projected/976fffb6-62f4-455d-b934-8cb1a5b175f9-kube-api-access-n724j\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.608063 master-0 kubenswrapper[37036]: I0312 14:42:50.607759 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/976fffb6-62f4-455d-b934-8cb1a5b175f9-os-client-config\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.709463 master-0 kubenswrapper[37036]: I0312 14:42:50.709342 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/976fffb6-62f4-455d-b934-8cb1a5b175f9-nova-console-recordings-pv\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.709463 master-0 kubenswrapper[37036]: I0312 14:42:50.709395 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n724j\" (UniqueName: \"kubernetes.io/projected/976fffb6-62f4-455d-b934-8cb1a5b175f9-kube-api-access-n724j\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.709463 master-0 kubenswrapper[37036]: I0312 14:42:50.709431 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/976fffb6-62f4-455d-b934-8cb1a5b175f9-os-client-config\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.721570 master-0 kubenswrapper[37036]: I0312 14:42:50.721521 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/976fffb6-62f4-455d-b934-8cb1a5b175f9-os-client-config\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:50.726263 master-0 kubenswrapper[37036]: I0312 14:42:50.726192 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n724j\" (UniqueName: \"kubernetes.io/projected/976fffb6-62f4-455d-b934-8cb1a5b175f9-kube-api-access-n724j\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " 
pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:51.310593 master-0 kubenswrapper[37036]: I0312 14:42:51.310535 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/976fffb6-62f4-455d-b934-8cb1a5b175f9-nova-console-recordings-pv\") pod \"nova-console-recorder-5756bd54c7-x8hn7\" (UID: \"976fffb6-62f4-455d-b934-8cb1a5b175f9\") " pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:51.333315 master-0 kubenswrapper[37036]: I0312 14:42:51.333256 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" Mar 12 14:42:51.773108 master-0 kubenswrapper[37036]: I0312 14:42:51.773043 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7"] Mar 12 14:42:51.784008 master-0 kubenswrapper[37036]: W0312 14:42:51.783927 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod976fffb6_62f4_455d_b934_8cb1a5b175f9.slice/crio-8ac2f38d768cc5e863653e44a077eb6809261f76feee1918674fde743ecb11f0 WatchSource:0}: Error finding container 8ac2f38d768cc5e863653e44a077eb6809261f76feee1918674fde743ecb11f0: Status 404 returned error can't find the container with id 8ac2f38d768cc5e863653e44a077eb6809261f76feee1918674fde743ecb11f0 Mar 12 14:42:52.754874 master-0 kubenswrapper[37036]: I0312 14:42:52.754807 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" event={"ID":"976fffb6-62f4-455d-b934-8cb1a5b175f9","Type":"ContainerStarted","Data":"8ac2f38d768cc5e863653e44a077eb6809261f76feee1918674fde743ecb11f0"} Mar 12 14:42:52.754874 master-0 kubenswrapper[37036]: I0312 14:42:52.754867 37036 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:42:52.755194 master-0 kubenswrapper[37036]: I0312 14:42:52.755115 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="cluster-policy-controller" containerID="cri-o://b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" gracePeriod=30 Mar 12 14:42:52.755256 master-0 kubenswrapper[37036]: I0312 14:42:52.755210 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" gracePeriod=30 Mar 12 14:42:52.755294 master-0 kubenswrapper[37036]: I0312 14:42:52.755205 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" gracePeriod=30 Mar 12 14:42:52.755401 master-0 kubenswrapper[37036]: I0312 14:42:52.755263 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" containerID="cri-o://cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" gracePeriod=30 Mar 12 14:42:52.757837 master-0 kubenswrapper[37036]: I0312 14:42:52.757786 37036 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:42:52.758267 master-0 
kubenswrapper[37036]: E0312 14:42:52.758241 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.758267 master-0 kubenswrapper[37036]: I0312 14:42:52.758266 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: E0312 14:42:52.758282 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: I0312 14:42:52.758294 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: E0312 14:42:52.758322 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="cluster-policy-controller" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: I0312 14:42:52.758330 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="cluster-policy-controller" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: E0312 14:42:52.758350 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-cert-syncer" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: I0312 14:42:52.758359 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-cert-syncer" Mar 12 14:42:52.758420 master-0 kubenswrapper[37036]: E0312 14:42:52.758386 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-recovery-controller" Mar 12 14:42:52.758420 
master-0 kubenswrapper[37036]: I0312 14:42:52.758395 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-recovery-controller" Mar 12 14:42:52.759532 master-0 kubenswrapper[37036]: I0312 14:42:52.758924 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="cluster-policy-controller" Mar 12 14:42:52.759532 master-0 kubenswrapper[37036]: I0312 14:42:52.758959 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.759532 master-0 kubenswrapper[37036]: I0312 14:42:52.759177 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager" Mar 12 14:42:52.759532 master-0 kubenswrapper[37036]: I0312 14:42:52.759213 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-cert-syncer" Mar 12 14:42:52.759532 master-0 kubenswrapper[37036]: I0312 14:42:52.759231 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="kube-controller-manager-recovery-controller" Mar 12 14:42:52.887285 master-0 kubenswrapper[37036]: I0312 14:42:52.886438 37036 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 14:42:52.887285 master-0 kubenswrapper[37036]: I0312 14:42:52.886510 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 14:42:52.887285 master-0 kubenswrapper[37036]: I0312 14:42:52.887071 37036 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body= Mar 12 14:42:52.887285 master-0 kubenswrapper[37036]: I0312 14:42:52.887145 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="965d6e0e3f611771f8ba2352415f565a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 12 14:42:52.945177 master-0 kubenswrapper[37036]: I0312 14:42:52.945134 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager-cert-syncer/0.log" Mar 12 14:42:52.946723 master-0 kubenswrapper[37036]: I0312 14:42:52.946689 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager/0.log" Mar 12 14:42:52.946812 master-0 kubenswrapper[37036]: I0312 14:42:52.946786 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:52.954251 master-0 kubenswrapper[37036]: I0312 14:42:52.951103 37036 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="965d6e0e3f611771f8ba2352415f565a" podUID="2eb2829edc45341b7e6764f2c4ff9d1f" Mar 12 14:42:52.955734 master-0 kubenswrapper[37036]: I0312 14:42:52.955513 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") pod \"965d6e0e3f611771f8ba2352415f565a\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " Mar 12 14:42:52.955734 master-0 kubenswrapper[37036]: I0312 14:42:52.955569 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") pod \"965d6e0e3f611771f8ba2352415f565a\" (UID: \"965d6e0e3f611771f8ba2352415f565a\") " Mar 12 14:42:52.955734 master-0 kubenswrapper[37036]: I0312 14:42:52.955654 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "965d6e0e3f611771f8ba2352415f565a" (UID: "965d6e0e3f611771f8ba2352415f565a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:42:52.955734 master-0 kubenswrapper[37036]: I0312 14:42:52.955722 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "965d6e0e3f611771f8ba2352415f565a" (UID: "965d6e0e3f611771f8ba2352415f565a"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:42:52.955997 master-0 kubenswrapper[37036]: I0312 14:42:52.955701 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:52.956274 master-0 kubenswrapper[37036]: I0312 14:42:52.956062 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:52.956548 master-0 kubenswrapper[37036]: I0312 14:42:52.956516 37036 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:52.956548 master-0 kubenswrapper[37036]: I0312 14:42:52.956543 37036 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/965d6e0e3f611771f8ba2352415f565a-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:53.059634 master-0 kubenswrapper[37036]: I0312 14:42:53.058316 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:53.059634 master-0 kubenswrapper[37036]: I0312 14:42:53.058401 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:53.059634 master-0 kubenswrapper[37036]: I0312 14:42:53.058513 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:53.059634 master-0 kubenswrapper[37036]: I0312 14:42:53.058552 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2829edc45341b7e6764f2c4ff9d1f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2eb2829edc45341b7e6764f2c4ff9d1f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:53.242237 master-0 kubenswrapper[37036]: I0312 14:42:53.242187 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965d6e0e3f611771f8ba2352415f565a" path="/var/lib/kubelet/pods/965d6e0e3f611771f8ba2352415f565a/volumes" Mar 12 14:42:53.767801 master-0 kubenswrapper[37036]: I0312 14:42:53.767752 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager-cert-syncer/0.log" Mar 12 14:42:53.769368 master-0 kubenswrapper[37036]: I0312 14:42:53.769330 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_965d6e0e3f611771f8ba2352415f565a/kube-controller-manager/0.log" Mar 12 14:42:53.769451 master-0 
kubenswrapper[37036]: I0312 14:42:53.769410 37036 generic.go:334] "Generic (PLEG): container finished" podID="965d6e0e3f611771f8ba2352415f565a" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" exitCode=0 Mar 12 14:42:53.769451 master-0 kubenswrapper[37036]: I0312 14:42:53.769443 37036 generic.go:334] "Generic (PLEG): container finished" podID="965d6e0e3f611771f8ba2352415f565a" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" exitCode=0 Mar 12 14:42:53.769544 master-0 kubenswrapper[37036]: I0312 14:42:53.769460 37036 generic.go:334] "Generic (PLEG): container finished" podID="965d6e0e3f611771f8ba2352415f565a" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" exitCode=2 Mar 12 14:42:53.769544 master-0 kubenswrapper[37036]: I0312 14:42:53.769473 37036 generic.go:334] "Generic (PLEG): container finished" podID="965d6e0e3f611771f8ba2352415f565a" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" exitCode=0 Mar 12 14:42:53.769544 master-0 kubenswrapper[37036]: I0312 14:42:53.769494 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:42:53.769679 master-0 kubenswrapper[37036]: I0312 14:42:53.769590 37036 scope.go:117] "RemoveContainer" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.771564 master-0 kubenswrapper[37036]: I0312 14:42:53.771373 37036 generic.go:334] "Generic (PLEG): container finished" podID="436e173b-7237-4811-84f9-5f5f56d5625d" containerID="9a41494081ce26755ac48819168149fc3a5dfcbbbd7c2b375b8b8ee57f106a3c" exitCode=0 Mar 12 14:42:53.771564 master-0 kubenswrapper[37036]: I0312 14:42:53.771473 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"436e173b-7237-4811-84f9-5f5f56d5625d","Type":"ContainerDied","Data":"9a41494081ce26755ac48819168149fc3a5dfcbbbd7c2b375b8b8ee57f106a3c"} Mar 12 14:42:53.772935 master-0 kubenswrapper[37036]: I0312 14:42:53.772886 37036 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="965d6e0e3f611771f8ba2352415f565a" podUID="2eb2829edc45341b7e6764f2c4ff9d1f" Mar 12 14:42:53.790257 master-0 kubenswrapper[37036]: I0312 14:42:53.790210 37036 scope.go:117] "RemoveContainer" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.816862 master-0 kubenswrapper[37036]: I0312 14:42:53.816820 37036 scope.go:117] "RemoveContainer" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.832011 master-0 kubenswrapper[37036]: I0312 14:42:53.831965 37036 scope.go:117] "RemoveContainer" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.845351 master-0 kubenswrapper[37036]: I0312 14:42:53.845294 37036 scope.go:117] "RemoveContainer" 
containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" Mar 12 14:42:53.859010 master-0 kubenswrapper[37036]: I0312 14:42:53.858963 37036 scope.go:117] "RemoveContainer" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.859532 master-0 kubenswrapper[37036]: E0312 14:42:53.859493 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": container with ID starting with cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa not found: ID does not exist" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.859610 master-0 kubenswrapper[37036]: I0312 14:42:53.859542 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa"} err="failed to get container status \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": rpc error: code = NotFound desc = could not find container \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": container with ID starting with cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa not found: ID does not exist" Mar 12 14:42:53.859610 master-0 kubenswrapper[37036]: I0312 14:42:53.859571 37036 scope.go:117] "RemoveContainer" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.859935 master-0 kubenswrapper[37036]: E0312 14:42:53.859885 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": container with ID starting with c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0 not found: ID does not exist" 
containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.860085 master-0 kubenswrapper[37036]: I0312 14:42:53.860052 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} err="failed to get container status \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": rpc error: code = NotFound desc = could not find container \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": container with ID starting with c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0 not found: ID does not exist" Mar 12 14:42:53.860202 master-0 kubenswrapper[37036]: I0312 14:42:53.860183 37036 scope.go:117] "RemoveContainer" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.860691 master-0 kubenswrapper[37036]: E0312 14:42:53.860650 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": container with ID starting with 5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795 not found: ID does not exist" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.860814 master-0 kubenswrapper[37036]: I0312 14:42:53.860787 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} err="failed to get container status \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": rpc error: code = NotFound desc = could not find container \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": container with ID starting with 5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795 not found: ID does not exist" Mar 12 14:42:53.860940 master-0 
kubenswrapper[37036]: I0312 14:42:53.860924 37036 scope.go:117] "RemoveContainer" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.861387 master-0 kubenswrapper[37036]: E0312 14:42:53.861303 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": container with ID starting with b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28 not found: ID does not exist" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.861387 master-0 kubenswrapper[37036]: I0312 14:42:53.861355 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} err="failed to get container status \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": rpc error: code = NotFound desc = could not find container \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": container with ID starting with b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28 not found: ID does not exist" Mar 12 14:42:53.861523 master-0 kubenswrapper[37036]: I0312 14:42:53.861393 37036 scope.go:117] "RemoveContainer" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" Mar 12 14:42:53.861680 master-0 kubenswrapper[37036]: E0312 14:42:53.861636 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": container with ID starting with 504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd not found: ID does not exist" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" Mar 12 14:42:53.861680 master-0 kubenswrapper[37036]: I0312 14:42:53.861667 
37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} err="failed to get container status \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": rpc error: code = NotFound desc = could not find container \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": container with ID starting with 504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd not found: ID does not exist" Mar 12 14:42:53.861803 master-0 kubenswrapper[37036]: I0312 14:42:53.861684 37036 scope.go:117] "RemoveContainer" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.862006 master-0 kubenswrapper[37036]: I0312 14:42:53.861960 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa"} err="failed to get container status \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": rpc error: code = NotFound desc = could not find container \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": container with ID starting with cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa not found: ID does not exist" Mar 12 14:42:53.862006 master-0 kubenswrapper[37036]: I0312 14:42:53.861994 37036 scope.go:117] "RemoveContainer" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.862308 master-0 kubenswrapper[37036]: I0312 14:42:53.862264 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} err="failed to get container status \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": rpc error: code = NotFound desc = could not find container 
\"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": container with ID starting with c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0 not found: ID does not exist" Mar 12 14:42:53.862308 master-0 kubenswrapper[37036]: I0312 14:42:53.862293 37036 scope.go:117] "RemoveContainer" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.862782 master-0 kubenswrapper[37036]: I0312 14:42:53.862745 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} err="failed to get container status \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": rpc error: code = NotFound desc = could not find container \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": container with ID starting with 5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795 not found: ID does not exist" Mar 12 14:42:53.862782 master-0 kubenswrapper[37036]: I0312 14:42:53.862776 37036 scope.go:117] "RemoveContainer" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.863109 master-0 kubenswrapper[37036]: I0312 14:42:53.863069 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} err="failed to get container status \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": rpc error: code = NotFound desc = could not find container \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": container with ID starting with b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28 not found: ID does not exist" Mar 12 14:42:53.863215 master-0 kubenswrapper[37036]: I0312 14:42:53.863098 37036 scope.go:117] "RemoveContainer" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" Mar 12 
14:42:53.863396 master-0 kubenswrapper[37036]: I0312 14:42:53.863360 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} err="failed to get container status \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": rpc error: code = NotFound desc = could not find container \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": container with ID starting with 504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd not found: ID does not exist" Mar 12 14:42:53.863471 master-0 kubenswrapper[37036]: I0312 14:42:53.863394 37036 scope.go:117] "RemoveContainer" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.863671 master-0 kubenswrapper[37036]: I0312 14:42:53.863643 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa"} err="failed to get container status \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": rpc error: code = NotFound desc = could not find container \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": container with ID starting with cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa not found: ID does not exist" Mar 12 14:42:53.863746 master-0 kubenswrapper[37036]: I0312 14:42:53.863669 37036 scope.go:117] "RemoveContainer" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.863966 master-0 kubenswrapper[37036]: I0312 14:42:53.863935 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} err="failed to get container status \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": rpc error: code = NotFound desc = could not find 
container \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": container with ID starting with c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0 not found: ID does not exist" Mar 12 14:42:53.864055 master-0 kubenswrapper[37036]: I0312 14:42:53.863963 37036 scope.go:117] "RemoveContainer" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.864258 master-0 kubenswrapper[37036]: I0312 14:42:53.864231 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} err="failed to get container status \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": rpc error: code = NotFound desc = could not find container \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": container with ID starting with 5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795 not found: ID does not exist" Mar 12 14:42:53.864369 master-0 kubenswrapper[37036]: I0312 14:42:53.864350 37036 scope.go:117] "RemoveContainer" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.864777 master-0 kubenswrapper[37036]: I0312 14:42:53.864735 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} err="failed to get container status \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": rpc error: code = NotFound desc = could not find container \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": container with ID starting with b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28 not found: ID does not exist" Mar 12 14:42:53.864777 master-0 kubenswrapper[37036]: I0312 14:42:53.864764 37036 scope.go:117] "RemoveContainer" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" 
Mar 12 14:42:53.865123 master-0 kubenswrapper[37036]: I0312 14:42:53.865095 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} err="failed to get container status \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": rpc error: code = NotFound desc = could not find container \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": container with ID starting with 504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd not found: ID does not exist" Mar 12 14:42:53.865123 master-0 kubenswrapper[37036]: I0312 14:42:53.865121 37036 scope.go:117] "RemoveContainer" containerID="cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa" Mar 12 14:42:53.865462 master-0 kubenswrapper[37036]: I0312 14:42:53.865436 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa"} err="failed to get container status \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": rpc error: code = NotFound desc = could not find container \"cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa\": container with ID starting with cb51e9bb1744681187a6484f188dc3bc225409d86f59e1d4df817700881c10aa not found: ID does not exist" Mar 12 14:42:53.865581 master-0 kubenswrapper[37036]: I0312 14:42:53.865563 37036 scope.go:117] "RemoveContainer" containerID="c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0" Mar 12 14:42:53.865942 master-0 kubenswrapper[37036]: I0312 14:42:53.865913 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0"} err="failed to get container status \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": rpc error: code = NotFound desc = could not find 
container \"c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0\": container with ID starting with c326c23f80eabfd879d1b447cee343ee4016f46bd77841b5099e9f81f5f658b0 not found: ID does not exist" Mar 12 14:42:53.865942 master-0 kubenswrapper[37036]: I0312 14:42:53.865934 37036 scope.go:117] "RemoveContainer" containerID="5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795" Mar 12 14:42:53.866161 master-0 kubenswrapper[37036]: I0312 14:42:53.866123 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795"} err="failed to get container status \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": rpc error: code = NotFound desc = could not find container \"5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795\": container with ID starting with 5a777677fec36e49bbe64c9b05b44eff50b1c023c77c06c6445a67c99994a795 not found: ID does not exist" Mar 12 14:42:53.866161 master-0 kubenswrapper[37036]: I0312 14:42:53.866145 37036 scope.go:117] "RemoveContainer" containerID="b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28" Mar 12 14:42:53.866379 master-0 kubenswrapper[37036]: I0312 14:42:53.866347 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28"} err="failed to get container status \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": rpc error: code = NotFound desc = could not find container \"b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28\": container with ID starting with b1eb935db4c0be68abd4cc014ad08aa7adf7a5087305d5ff89fa17bc8e119d28 not found: ID does not exist" Mar 12 14:42:53.866379 master-0 kubenswrapper[37036]: I0312 14:42:53.866369 37036 scope.go:117] "RemoveContainer" containerID="504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd" 
Mar 12 14:42:53.866880 master-0 kubenswrapper[37036]: I0312 14:42:53.866851 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd"} err="failed to get container status \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": rpc error: code = NotFound desc = could not find container \"504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd\": container with ID starting with 504306903ba69992729ff0c67d4162b6b702e741350e177ef97d894f5d5364fd not found: ID does not exist" Mar 12 14:42:55.068175 master-0 kubenswrapper[37036]: I0312 14:42:55.068109 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:55.185503 master-0 kubenswrapper[37036]: I0312 14:42:55.185448 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock\") pod \"436e173b-7237-4811-84f9-5f5f56d5625d\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " Mar 12 14:42:55.185761 master-0 kubenswrapper[37036]: I0312 14:42:55.185543 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir\") pod \"436e173b-7237-4811-84f9-5f5f56d5625d\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " Mar 12 14:42:55.185761 master-0 kubenswrapper[37036]: I0312 14:42:55.185546 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock" (OuterVolumeSpecName: "var-lock") pod "436e173b-7237-4811-84f9-5f5f56d5625d" (UID: "436e173b-7237-4811-84f9-5f5f56d5625d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:42:55.185761 master-0 kubenswrapper[37036]: I0312 14:42:55.185622 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access\") pod \"436e173b-7237-4811-84f9-5f5f56d5625d\" (UID: \"436e173b-7237-4811-84f9-5f5f56d5625d\") " Mar 12 14:42:55.185761 master-0 kubenswrapper[37036]: I0312 14:42:55.185633 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "436e173b-7237-4811-84f9-5f5f56d5625d" (UID: "436e173b-7237-4811-84f9-5f5f56d5625d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:42:55.186054 master-0 kubenswrapper[37036]: I0312 14:42:55.186030 37036 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:55.186054 master-0 kubenswrapper[37036]: I0312 14:42:55.186049 37036 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/436e173b-7237-4811-84f9-5f5f56d5625d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:55.188425 master-0 kubenswrapper[37036]: I0312 14:42:55.188384 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "436e173b-7237-4811-84f9-5f5f56d5625d" (UID: "436e173b-7237-4811-84f9-5f5f56d5625d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:42:55.287723 master-0 kubenswrapper[37036]: I0312 14:42:55.287602 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/436e173b-7237-4811-84f9-5f5f56d5625d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 14:42:55.788875 master-0 kubenswrapper[37036]: I0312 14:42:55.788817 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"436e173b-7237-4811-84f9-5f5f56d5625d","Type":"ContainerDied","Data":"143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3"} Mar 12 14:42:55.788875 master-0 kubenswrapper[37036]: I0312 14:42:55.788860 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="143faf52969ac6eaa56472778bfda48be8fc59a1d52b133d6f6e94e21f12a1f3" Mar 12 14:42:55.789186 master-0 kubenswrapper[37036]: I0312 14:42:55.788927 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 14:42:59.757273 master-0 kubenswrapper[37036]: I0312 14:42:59.757085 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64d4ccfbcf-82w2p" podUID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" containerName="console" containerID="cri-o://1c1bfefbf00413f0e63810fcf625b0dbe7a64e5b0ba154c6ffd67314866e31d5" gracePeriod=15 Mar 12 14:43:01.836942 master-0 kubenswrapper[37036]: I0312 14:43:01.832639 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d4ccfbcf-82w2p_cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf/console/0.log" Mar 12 14:43:01.836942 master-0 kubenswrapper[37036]: I0312 14:43:01.832721 37036 generic.go:334] "Generic (PLEG): container finished" podID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" containerID="1c1bfefbf00413f0e63810fcf625b0dbe7a64e5b0ba154c6ffd67314866e31d5" exitCode=2 Mar 12 14:43:01.836942 master-0 kubenswrapper[37036]: I0312 14:43:01.832751 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d4ccfbcf-82w2p" event={"ID":"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf","Type":"ContainerDied","Data":"1c1bfefbf00413f0e63810fcf625b0dbe7a64e5b0ba154c6ffd67314866e31d5"} Mar 12 14:43:02.056687 master-0 kubenswrapper[37036]: I0312 14:43:02.056653 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d4ccfbcf-82w2p_cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf/console/0.log" Mar 12 14:43:02.057021 master-0 kubenswrapper[37036]: I0312 14:43:02.056717 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d4ccfbcf-82w2p" Mar 12 14:43:02.126183 master-0 kubenswrapper[37036]: I0312 14:43:02.126120 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126183 master-0 kubenswrapper[37036]: I0312 14:43:02.126181 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126449 master-0 kubenswrapper[37036]: I0312 14:43:02.126207 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126449 master-0 kubenswrapper[37036]: I0312 14:43:02.126232 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126449 master-0 kubenswrapper[37036]: I0312 14:43:02.126252 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126449 master-0 kubenswrapper[37036]: I0312 
14:43:02.126274 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.126449 master-0 kubenswrapper[37036]: I0312 14:43:02.126342 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9f6v\" (UniqueName: \"kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v\") pod \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\" (UID: \"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf\") " Mar 12 14:43:02.127345 master-0 kubenswrapper[37036]: I0312 14:43:02.127322 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca" (OuterVolumeSpecName: "service-ca") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:43:02.127524 master-0 kubenswrapper[37036]: I0312 14:43:02.127477 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:43:02.127625 master-0 kubenswrapper[37036]: I0312 14:43:02.127605 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:43:02.129233 master-0 kubenswrapper[37036]: I0312 14:43:02.129117 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config" (OuterVolumeSpecName: "console-config") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:43:02.129843 master-0 kubenswrapper[37036]: I0312 14:43:02.129811 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:43:02.130367 master-0 kubenswrapper[37036]: I0312 14:43:02.130339 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v" (OuterVolumeSpecName: "kube-api-access-p9f6v") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "kube-api-access-p9f6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:43:02.145537 master-0 kubenswrapper[37036]: I0312 14:43:02.145488 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" (UID: "cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228444 37036 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228497 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228511 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228524 37036 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228537 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228549 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.228544 master-0 kubenswrapper[37036]: I0312 14:43:02.228561 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9f6v\" (UniqueName: 
\"kubernetes.io/projected/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf-kube-api-access-p9f6v\") on node \"master-0\" DevicePath \"\"" Mar 12 14:43:02.848509 master-0 kubenswrapper[37036]: I0312 14:43:02.848412 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" event={"ID":"976fffb6-62f4-455d-b934-8cb1a5b175f9","Type":"ContainerStarted","Data":"c279b3f7538d001fcbe770ca6dc7afbb05a98d15530d344cc19a913eda28707b"} Mar 12 14:43:02.848509 master-0 kubenswrapper[37036]: I0312 14:43:02.848494 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" event={"ID":"976fffb6-62f4-455d-b934-8cb1a5b175f9","Type":"ContainerStarted","Data":"709c58710577e928b48be639f5d22eae6a3300807d06d2b0e638717947042296"} Mar 12 14:43:02.853479 master-0 kubenswrapper[37036]: I0312 14:43:02.853426 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64d4ccfbcf-82w2p_cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf/console/0.log" Mar 12 14:43:02.853612 master-0 kubenswrapper[37036]: I0312 14:43:02.853532 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d4ccfbcf-82w2p" event={"ID":"cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf","Type":"ContainerDied","Data":"f7335b93c32611dc18f2cd874ad5b43ab8e42e31f93e4d03d6fb8b78a630a081"} Mar 12 14:43:02.853612 master-0 kubenswrapper[37036]: I0312 14:43:02.853600 37036 scope.go:117] "RemoveContainer" containerID="1c1bfefbf00413f0e63810fcf625b0dbe7a64e5b0ba154c6ffd67314866e31d5" Mar 12 14:43:02.853799 master-0 kubenswrapper[37036]: I0312 14:43:02.853766 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d4ccfbcf-82w2p" Mar 12 14:43:02.883887 master-0 kubenswrapper[37036]: I0312 14:43:02.883764 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-5756bd54c7-x8hn7" podStartSLOduration=2.06359545 podStartE2EDuration="12.883726828s" podCreationTimestamp="2026-03-12 14:42:50 +0000 UTC" firstStartedPulling="2026-03-12 14:42:51.78574045 +0000 UTC m=+430.793481397" lastFinishedPulling="2026-03-12 14:43:02.605871838 +0000 UTC m=+441.613612775" observedRunningTime="2026-03-12 14:43:02.869608816 +0000 UTC m=+441.877349793" watchObservedRunningTime="2026-03-12 14:43:02.883726828 +0000 UTC m=+441.891467825" Mar 12 14:43:02.917066 master-0 kubenswrapper[37036]: I0312 14:43:02.916980 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"] Mar 12 14:43:02.924047 master-0 kubenswrapper[37036]: I0312 14:43:02.923985 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64d4ccfbcf-82w2p"] Mar 12 14:43:03.242573 master-0 kubenswrapper[37036]: I0312 14:43:03.242467 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" path="/var/lib/kubelet/pods/cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf/volumes" Mar 12 14:43:07.234492 master-0 kubenswrapper[37036]: I0312 14:43:07.234437 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:43:07.250494 master-0 kubenswrapper[37036]: I0312 14:43:07.250448 37036 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2a3f1a1a-ad96-4e43-9f96-96f8312ad769" Mar 12 14:43:07.250633 master-0 kubenswrapper[37036]: I0312 14:43:07.250534 37036 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2a3f1a1a-ad96-4e43-9f96-96f8312ad769" Mar 12 14:43:07.272262 master-0 kubenswrapper[37036]: I0312 14:43:07.272191 37036 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 14:43:07.272832 master-0 kubenswrapper[37036]: I0312 14:43:07.272797 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:43:07.280202 master-0 kubenswrapper[37036]: I0312 14:43:07.280137 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 14:43:07.290965 master-0 kubenswrapper[37036]: I0312 14:43:07.290877 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:07.312005 master-0 kubenswrapper[37036]: I0312 14:43:07.311880 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 14:43:07.312507 master-0 kubenswrapper[37036]: W0312 14:43:07.312389 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb2829edc45341b7e6764f2c4ff9d1f.slice/crio-291e894245e0fd8d0b2c996d75d17abad37b879b936820810732c9721875b8f1 WatchSource:0}: Error finding container 291e894245e0fd8d0b2c996d75d17abad37b879b936820810732c9721875b8f1: Status 404 returned error can't find the container with id 291e894245e0fd8d0b2c996d75d17abad37b879b936820810732c9721875b8f1
Mar 12 14:43:07.896198 master-0 kubenswrapper[37036]: I0312 14:43:07.896149 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2eb2829edc45341b7e6764f2c4ff9d1f","Type":"ContainerStarted","Data":"4218d6c86b0f8047ddc3fdeffa96ac88bac5d46a5e777fc558e2b026e1200501"}
Mar 12 14:43:07.896304 master-0 kubenswrapper[37036]: I0312 14:43:07.896205 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2eb2829edc45341b7e6764f2c4ff9d1f","Type":"ContainerStarted","Data":"97fdd59b290ca32ba661a2cfa73fc3f604c96b5ffccb0b99c87fb7b8a6fe399c"}
Mar 12 14:43:07.896304 master-0 kubenswrapper[37036]: I0312 14:43:07.896218 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2eb2829edc45341b7e6764f2c4ff9d1f","Type":"ContainerStarted","Data":"291e894245e0fd8d0b2c996d75d17abad37b879b936820810732c9721875b8f1"}
Mar 12 14:43:08.907195 master-0 kubenswrapper[37036]: I0312 14:43:08.907123 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2eb2829edc45341b7e6764f2c4ff9d1f","Type":"ContainerStarted","Data":"bb9b531d90989ebfece235dce9dd627a9c52ded7da8b22ce2e5c249a10b72cca"}
Mar 12 14:43:08.907195 master-0 kubenswrapper[37036]: I0312 14:43:08.907193 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2eb2829edc45341b7e6764f2c4ff9d1f","Type":"ContainerStarted","Data":"002f50bcc1f17834421924df4255bba561610aa905d11f854273b8d0ae83e549"}
Mar 12 14:43:08.928552 master-0 kubenswrapper[37036]: I0312 14:43:08.928408 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.92838741 podStartE2EDuration="1.92838741s" podCreationTimestamp="2026-03-12 14:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:43:08.927991791 +0000 UTC m=+447.935732728" watchObservedRunningTime="2026-03-12 14:43:08.92838741 +0000 UTC m=+447.936128347"
Mar 12 14:43:17.292740 master-0 kubenswrapper[37036]: I0312 14:43:17.292647 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.293319 master-0 kubenswrapper[37036]: I0312 14:43:17.292754 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.293319 master-0 kubenswrapper[37036]: I0312 14:43:17.292784 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.293319 master-0 kubenswrapper[37036]: I0312 14:43:17.292809 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.296877 master-0 kubenswrapper[37036]: I0312 14:43:17.296835 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.298632 master-0 kubenswrapper[37036]: I0312 14:43:17.298573 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.974114 master-0 kubenswrapper[37036]: I0312 14:43:17.974054 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:17.975242 master-0 kubenswrapper[37036]: I0312 14:43:17.975174 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 14:43:30.691874 master-0 kubenswrapper[37036]: I0312 14:43:30.691817 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"]
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: E0312 14:43:30.692129 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e173b-7237-4811-84f9-5f5f56d5625d" containerName="installer"
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: I0312 14:43:30.692145 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e173b-7237-4811-84f9-5f5f56d5625d" containerName="installer"
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: E0312 14:43:30.692205 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" containerName="console"
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: I0312 14:43:30.692214 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" containerName="console"
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: I0312 14:43:30.692376 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="436e173b-7237-4811-84f9-5f5f56d5625d" containerName="installer"
Mar 12 14:43:30.692599 master-0 kubenswrapper[37036]: I0312 14:43:30.692402 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6d7a20-b171-4d99-bedf-84b6dbf6d9cf" containerName="console"
Mar 12 14:43:30.693586 master-0 kubenswrapper[37036]: I0312 14:43:30.693563 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.695704 master-0 kubenswrapper[37036]: I0312 14:43:30.695674 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-9qbhc"
Mar 12 14:43:30.710411 master-0 kubenswrapper[37036]: I0312 14:43:30.710351 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"]
Mar 12 14:43:30.867683 master-0 kubenswrapper[37036]: I0312 14:43:30.867605 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.867958 master-0 kubenswrapper[37036]: I0312 14:43:30.867788 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.868124 master-0 kubenswrapper[37036]: I0312 14:43:30.868056 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5r74\" (UniqueName: \"kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.969808 master-0 kubenswrapper[37036]: I0312 14:43:30.969660 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.969808 master-0 kubenswrapper[37036]: I0312 14:43:30.969765 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5r74\" (UniqueName: \"kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.970075 master-0 kubenswrapper[37036]: I0312 14:43:30.969837 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.970494 master-0 kubenswrapper[37036]: I0312 14:43:30.970465 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.970575 master-0 kubenswrapper[37036]: I0312 14:43:30.970459 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:30.998926 master-0 kubenswrapper[37036]: I0312 14:43:30.998821 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5r74\" (UniqueName: \"kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:31.023242 master-0 kubenswrapper[37036]: I0312 14:43:31.023197 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:31.468539 master-0 kubenswrapper[37036]: I0312 14:43:31.468484 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"]
Mar 12 14:43:31.478038 master-0 kubenswrapper[37036]: W0312 14:43:31.473446 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c4b36f1_fd12_4130_a093_66ffe247ec39.slice/crio-aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e WatchSource:0}: Error finding container aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e: Status 404 returned error can't find the container with id aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e
Mar 12 14:43:32.076865 master-0 kubenswrapper[37036]: I0312 14:43:32.076796 37036 generic.go:334] "Generic (PLEG): container finished" podID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerID="f7a153b1cf2fc6f74a84c4927d13009e25c5dbaa62c033da86a603d0656eefdd" exitCode=0
Mar 12 14:43:32.076865 master-0 kubenswrapper[37036]: I0312 14:43:32.076860 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x" event={"ID":"2c4b36f1-fd12-4130-a093-66ffe247ec39","Type":"ContainerDied","Data":"f7a153b1cf2fc6f74a84c4927d13009e25c5dbaa62c033da86a603d0656eefdd"}
Mar 12 14:43:32.077705 master-0 kubenswrapper[37036]: I0312 14:43:32.076891 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x" event={"ID":"2c4b36f1-fd12-4130-a093-66ffe247ec39","Type":"ContainerStarted","Data":"aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e"}
Mar 12 14:43:34.093878 master-0 kubenswrapper[37036]: I0312 14:43:34.093796 37036 generic.go:334] "Generic (PLEG): container finished" podID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerID="e37bc2ace7da7409fff8b49df8544be87b919c5c1d1e2e887308d9e8e0d376a1" exitCode=0
Mar 12 14:43:34.093878 master-0 kubenswrapper[37036]: I0312 14:43:34.093877 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x" event={"ID":"2c4b36f1-fd12-4130-a093-66ffe247ec39","Type":"ContainerDied","Data":"e37bc2ace7da7409fff8b49df8544be87b919c5c1d1e2e887308d9e8e0d376a1"}
Mar 12 14:43:35.105093 master-0 kubenswrapper[37036]: I0312 14:43:35.105009 37036 generic.go:334] "Generic (PLEG): container finished" podID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerID="862daf0f0dc6ede3714529840ab26038cf1e2889edb33aff06fd42299491f397" exitCode=0
Mar 12 14:43:35.105992 master-0 kubenswrapper[37036]: I0312 14:43:35.105921 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x" event={"ID":"2c4b36f1-fd12-4130-a093-66ffe247ec39","Type":"ContainerDied","Data":"862daf0f0dc6ede3714529840ab26038cf1e2889edb33aff06fd42299491f397"}
Mar 12 14:43:36.386713 master-0 kubenswrapper[37036]: I0312 14:43:36.386623 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:36.478627 master-0 kubenswrapper[37036]: I0312 14:43:36.478561 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5r74\" (UniqueName: \"kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74\") pod \"2c4b36f1-fd12-4130-a093-66ffe247ec39\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") "
Mar 12 14:43:36.478822 master-0 kubenswrapper[37036]: I0312 14:43:36.478649 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle\") pod \"2c4b36f1-fd12-4130-a093-66ffe247ec39\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") "
Mar 12 14:43:36.478822 master-0 kubenswrapper[37036]: I0312 14:43:36.478706 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util\") pod \"2c4b36f1-fd12-4130-a093-66ffe247ec39\" (UID: \"2c4b36f1-fd12-4130-a093-66ffe247ec39\") "
Mar 12 14:43:36.479576 master-0 kubenswrapper[37036]: I0312 14:43:36.479535 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle" (OuterVolumeSpecName: "bundle") pod "2c4b36f1-fd12-4130-a093-66ffe247ec39" (UID: "2c4b36f1-fd12-4130-a093-66ffe247ec39"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:43:36.490962 master-0 kubenswrapper[37036]: I0312 14:43:36.481768 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74" (OuterVolumeSpecName: "kube-api-access-f5r74") pod "2c4b36f1-fd12-4130-a093-66ffe247ec39" (UID: "2c4b36f1-fd12-4130-a093-66ffe247ec39"). InnerVolumeSpecName "kube-api-access-f5r74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:43:36.492606 master-0 kubenswrapper[37036]: I0312 14:43:36.492548 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util" (OuterVolumeSpecName: "util") pod "2c4b36f1-fd12-4130-a093-66ffe247ec39" (UID: "2c4b36f1-fd12-4130-a093-66ffe247ec39"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:43:36.580202 master-0 kubenswrapper[37036]: I0312 14:43:36.580126 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:43:36.580202 master-0 kubenswrapper[37036]: I0312 14:43:36.580168 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c4b36f1-fd12-4130-a093-66ffe247ec39-util\") on node \"master-0\" DevicePath \"\""
Mar 12 14:43:36.580202 master-0 kubenswrapper[37036]: I0312 14:43:36.580182 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5r74\" (UniqueName: \"kubernetes.io/projected/2c4b36f1-fd12-4130-a093-66ffe247ec39-kube-api-access-f5r74\") on node \"master-0\" DevicePath \"\""
Mar 12 14:43:37.121576 master-0 kubenswrapper[37036]: I0312 14:43:37.121504 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x" event={"ID":"2c4b36f1-fd12-4130-a093-66ffe247ec39","Type":"ContainerDied","Data":"aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e"}
Mar 12 14:43:37.121576 master-0 kubenswrapper[37036]: I0312 14:43:37.121556 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qmm8x"
Mar 12 14:43:37.121576 master-0 kubenswrapper[37036]: I0312 14:43:37.121568 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf2290b650ad9a22a7927a4a5c03a8cf26fc3b34551f19e16cc35bfcf24b42e"
Mar 12 14:43:43.902737 master-0 kubenswrapper[37036]: I0312 14:43:43.902668 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"]
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: E0312 14:43:43.902968 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="pull"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: I0312 14:43:43.902981 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="pull"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: E0312 14:43:43.903020 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="util"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: I0312 14:43:43.903027 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="util"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: E0312 14:43:43.903037 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="extract"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: I0312 14:43:43.903043 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="extract"
Mar 12 14:43:43.903463 master-0 kubenswrapper[37036]: I0312 14:43:43.903213 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4b36f1-fd12-4130-a093-66ffe247ec39" containerName="extract"
Mar 12 14:43:43.903886 master-0 kubenswrapper[37036]: I0312 14:43:43.903830 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:43.905985 master-0 kubenswrapper[37036]: I0312 14:43:43.905954 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 12 14:43:43.906156 master-0 kubenswrapper[37036]: I0312 14:43:43.906135 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 12 14:43:43.906273 master-0 kubenswrapper[37036]: I0312 14:43:43.906252 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 12 14:43:43.906377 master-0 kubenswrapper[37036]: I0312 14:43:43.906358 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 12 14:43:43.906499 master-0 kubenswrapper[37036]: I0312 14:43:43.906476 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 12 14:43:43.919030 master-0 kubenswrapper[37036]: I0312 14:43:43.918972 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"]
Mar 12 14:43:43.991434 master-0 kubenswrapper[37036]: I0312 14:43:43.991369 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-webhook-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:43.991689 master-0 kubenswrapper[37036]: I0312 14:43:43.991459 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklmm\" (UniqueName: \"kubernetes.io/projected/c5c4c98b-ee2a-4414-ab32-72d8782396ca-kube-api-access-xklmm\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:43.991689 master-0 kubenswrapper[37036]: I0312 14:43:43.991501 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4c98b-ee2a-4414-ab32-72d8782396ca-socket-dir\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:43.991689 master-0 kubenswrapper[37036]: I0312 14:43:43.991534 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-metrics-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:43.991689 master-0 kubenswrapper[37036]: I0312 14:43:43.991610 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-apiservice-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.093258 master-0 kubenswrapper[37036]: I0312 14:43:44.093177 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-apiservice-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.093678 master-0 kubenswrapper[37036]: I0312 14:43:44.093604 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-webhook-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.093945 master-0 kubenswrapper[37036]: I0312 14:43:44.093922 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xklmm\" (UniqueName: \"kubernetes.io/projected/c5c4c98b-ee2a-4414-ab32-72d8782396ca-kube-api-access-xklmm\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.094129 master-0 kubenswrapper[37036]: I0312 14:43:44.094111 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4c98b-ee2a-4414-ab32-72d8782396ca-socket-dir\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.094582 master-0 kubenswrapper[37036]: I0312 14:43:44.094551 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-metrics-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.094786 master-0 kubenswrapper[37036]: I0312 14:43:44.094769 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5c4c98b-ee2a-4414-ab32-72d8782396ca-socket-dir\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.099733 master-0 kubenswrapper[37036]: I0312 14:43:44.099688 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-webhook-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.103755 master-0 kubenswrapper[37036]: I0312 14:43:44.103692 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-metrics-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.107920 master-0 kubenswrapper[37036]: I0312 14:43:44.104267 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5c4c98b-ee2a-4414-ab32-72d8782396ca-apiservice-cert\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.133723 master-0 kubenswrapper[37036]: I0312 14:43:44.133653 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xklmm\" (UniqueName: \"kubernetes.io/projected/c5c4c98b-ee2a-4414-ab32-72d8782396ca-kube-api-access-xklmm\") pod \"lvms-operator-5fb5f9d48b-dmdd7\" (UID: \"c5c4c98b-ee2a-4414-ab32-72d8782396ca\") " pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.222002 master-0 kubenswrapper[37036]: I0312 14:43:44.221837 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:44.615762 master-0 kubenswrapper[37036]: W0312 14:43:44.615711 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c4c98b_ee2a_4414_ab32_72d8782396ca.slice/crio-30269d08267a38e380b173bd0c650ab3e0beeda8a8163ac65d9071fc50e2ab87 WatchSource:0}: Error finding container 30269d08267a38e380b173bd0c650ab3e0beeda8a8163ac65d9071fc50e2ab87: Status 404 returned error can't find the container with id 30269d08267a38e380b173bd0c650ab3e0beeda8a8163ac65d9071fc50e2ab87
Mar 12 14:43:44.618867 master-0 kubenswrapper[37036]: I0312 14:43:44.618805 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"]
Mar 12 14:43:45.173212 master-0 kubenswrapper[37036]: I0312 14:43:45.173138 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7" event={"ID":"c5c4c98b-ee2a-4414-ab32-72d8782396ca","Type":"ContainerStarted","Data":"30269d08267a38e380b173bd0c650ab3e0beeda8a8163ac65d9071fc50e2ab87"}
Mar 12 14:43:49.203435 master-0 kubenswrapper[37036]: I0312 14:43:49.203369 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7" event={"ID":"c5c4c98b-ee2a-4414-ab32-72d8782396ca","Type":"ContainerStarted","Data":"9abcdb8d5de975259db711166c24c24420c700fdf5f7aa209ce322ed29d8812a"}
Mar 12 14:43:49.204059 master-0 kubenswrapper[37036]: I0312 14:43:49.203698 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:49.207469 master-0 kubenswrapper[37036]: I0312 14:43:49.207413 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7"
Mar 12 14:43:49.257960 master-0 kubenswrapper[37036]: I0312 14:43:49.257852 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-5fb5f9d48b-dmdd7" podStartSLOduration=2.154776682 podStartE2EDuration="6.257828323s" podCreationTimestamp="2026-03-12 14:43:43 +0000 UTC" firstStartedPulling="2026-03-12 14:43:44.618062637 +0000 UTC m=+483.625803584" lastFinishedPulling="2026-03-12 14:43:48.721114288 +0000 UTC m=+487.728855225" observedRunningTime="2026-03-12 14:43:49.228342109 +0000 UTC m=+488.236083046" watchObservedRunningTime="2026-03-12 14:43:49.257828323 +0000 UTC m=+488.265569260"
Mar 12 14:43:53.070415 master-0 kubenswrapper[37036]: I0312 14:43:53.070335 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"]
Mar 12 14:43:53.072931 master-0 kubenswrapper[37036]: I0312 14:43:53.072866 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.075481 master-0 kubenswrapper[37036]: I0312 14:43:53.075424 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-9qbhc"
Mar 12 14:43:53.081186 master-0 kubenswrapper[37036]: I0312 14:43:53.081135 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"]
Mar 12 14:43:53.162819 master-0 kubenswrapper[37036]: I0312 14:43:53.162765 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.163076 master-0 kubenswrapper[37036]: I0312 14:43:53.162827 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xms\" (UniqueName: \"kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.163076 master-0 kubenswrapper[37036]: I0312 14:43:53.162850 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.263852 master-0 kubenswrapper[37036]: I0312 14:43:53.263783 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.263852 master-0 kubenswrapper[37036]: I0312 14:43:53.263835 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4xms\" (UniqueName: \"kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.263852 master-0 kubenswrapper[37036]: I0312 14:43:53.263858 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.264461 master-0 kubenswrapper[37036]: I0312 14:43:53.264364 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.264461 master-0 kubenswrapper[37036]: I0312 14:43:53.264406 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.280364 master-0 kubenswrapper[37036]: I0312 14:43:53.280318 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4xms\" (UniqueName: \"kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.401609 master-0 kubenswrapper[37036]: I0312 14:43:53.401475 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"
Mar 12 14:43:53.830811 master-0 kubenswrapper[37036]: I0312 14:43:53.829161 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4"]
Mar 12 14:43:54.238468 master-0 kubenswrapper[37036]: I0312 14:43:54.238424 37036 generic.go:334] "Generic (PLEG): container finished" podID="0f47aa02-212a-41fc-9069-504b571229aa" containerID="5cd122138982a41a3cc3a0b1b466a32327090be38c86370e129f67da2a717b35" exitCode=0
Mar 12 14:43:54.239096 master-0 kubenswrapper[37036]: I0312 14:43:54.238482 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" event={"ID":"0f47aa02-212a-41fc-9069-504b571229aa","Type":"ContainerDied","Data":"5cd122138982a41a3cc3a0b1b466a32327090be38c86370e129f67da2a717b35"}
Mar 12 14:43:54.239096 master-0 kubenswrapper[37036]: I0312 14:43:54.238575 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" event={"ID":"0f47aa02-212a-41fc-9069-504b571229aa","Type":"ContainerStarted","Data":"90bdf827bcf46e4a8ca2590abe21dd9c8b6ceaf3998e8c639ec4f75f466da579"}
Mar 12 14:43:54.676718 master-0 kubenswrapper[37036]: I0312 14:43:54.676561 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"]
Mar 12 14:43:54.678098 master-0 kubenswrapper[37036]: I0312 14:43:54.678071 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"
Mar 12 14:43:54.690777 master-0 kubenswrapper[37036]: I0312 14:43:54.690718 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxfv\" (UniqueName: \"kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"
Mar 12 14:43:54.690999 master-0 kubenswrapper[37036]: I0312 14:43:54.690831 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"
Mar 12 14:43:54.690999 master-0 kubenswrapper[37036]: I0312 14:43:54.690933 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"
Mar 12 14:43:54.693584 master-0 kubenswrapper[37036]: I0312 14:43:54.693542 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"]
Mar 12 14:43:54.792302 master-0 kubenswrapper[37036]: I0312 14:43:54.792246 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-cnxfv\" (UniqueName: \"kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:54.792530 master-0 kubenswrapper[37036]: I0312 14:43:54.792320 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:54.792530 master-0 kubenswrapper[37036]: I0312 14:43:54.792365 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:54.792846 master-0 kubenswrapper[37036]: I0312 14:43:54.792822 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:54.793073 master-0 kubenswrapper[37036]: I0312 14:43:54.793053 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:54.808110 master-0 kubenswrapper[37036]: I0312 14:43:54.808044 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnxfv\" (UniqueName: \"kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:55.036751 master-0 kubenswrapper[37036]: I0312 14:43:55.036647 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:43:55.430383 master-0 kubenswrapper[37036]: I0312 14:43:55.430324 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw"] Mar 12 14:43:55.437577 master-0 kubenswrapper[37036]: W0312 14:43:55.437516 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80026a08_d982_4f4b_a2bf_b35fd1f77279.slice/crio-6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8 WatchSource:0}: Error finding container 6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8: Status 404 returned error can't find the container with id 6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8 Mar 12 14:43:55.678454 master-0 kubenswrapper[37036]: I0312 14:43:55.678400 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p"] Mar 12 14:43:55.680160 master-0 kubenswrapper[37036]: 
I0312 14:43:55.680079 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.687464 master-0 kubenswrapper[37036]: I0312 14:43:55.687410 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p"] Mar 12 14:43:55.810184 master-0 kubenswrapper[37036]: I0312 14:43:55.810113 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsws5\" (UniqueName: \"kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.810454 master-0 kubenswrapper[37036]: I0312 14:43:55.810218 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.810454 master-0 kubenswrapper[37036]: I0312 14:43:55.810354 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.911953 master-0 kubenswrapper[37036]: I0312 14:43:55.911866 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.912168 master-0 kubenswrapper[37036]: I0312 14:43:55.912060 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsws5\" (UniqueName: \"kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.912168 master-0 kubenswrapper[37036]: I0312 14:43:55.912119 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.912450 master-0 kubenswrapper[37036]: I0312 14:43:55.912417 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.912572 master-0 kubenswrapper[37036]: I0312 14:43:55.912548 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:55.928763 master-0 kubenswrapper[37036]: I0312 14:43:55.928715 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsws5\" (UniqueName: \"kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:56.036086 master-0 kubenswrapper[37036]: I0312 14:43:56.036021 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:43:56.261082 master-0 kubenswrapper[37036]: I0312 14:43:56.261034 37036 generic.go:334] "Generic (PLEG): container finished" podID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerID="78859716bb4d462f891e797bd6af72c1c297699309247c549a40518f19caf605" exitCode=0 Mar 12 14:43:56.261082 master-0 kubenswrapper[37036]: I0312 14:43:56.261080 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" event={"ID":"80026a08-d982-4f4b-a2bf-b35fd1f77279","Type":"ContainerDied","Data":"78859716bb4d462f891e797bd6af72c1c297699309247c549a40518f19caf605"} Mar 12 14:43:56.261386 master-0 kubenswrapper[37036]: I0312 14:43:56.261106 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" 
event={"ID":"80026a08-d982-4f4b-a2bf-b35fd1f77279","Type":"ContainerStarted","Data":"6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8"} Mar 12 14:43:56.428448 master-0 kubenswrapper[37036]: I0312 14:43:56.428391 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p"] Mar 12 14:43:57.569836 master-0 kubenswrapper[37036]: W0312 14:43:57.569777 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd075974_a975_4e1b_af3b_ba8bdf4678e0.slice/crio-92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822 WatchSource:0}: Error finding container 92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822: Status 404 returned error can't find the container with id 92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822 Mar 12 14:43:58.276933 master-0 kubenswrapper[37036]: I0312 14:43:58.276847 37036 generic.go:334] "Generic (PLEG): container finished" podID="0f47aa02-212a-41fc-9069-504b571229aa" containerID="99a520590dde235aa1d9bd752a7ce4b2956566640a80c666b05985b098694def" exitCode=0 Mar 12 14:43:58.277190 master-0 kubenswrapper[37036]: I0312 14:43:58.276942 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" event={"ID":"0f47aa02-212a-41fc-9069-504b571229aa","Type":"ContainerDied","Data":"99a520590dde235aa1d9bd752a7ce4b2956566640a80c666b05985b098694def"} Mar 12 14:43:58.279735 master-0 kubenswrapper[37036]: I0312 14:43:58.279682 37036 generic.go:334] "Generic (PLEG): container finished" podID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerID="0d0196ce0534991336f1321ab7bbe989a4c181708b0553d95d8507276eb1284d" exitCode=0 Mar 12 14:43:58.279823 master-0 kubenswrapper[37036]: I0312 14:43:58.279779 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" event={"ID":"80026a08-d982-4f4b-a2bf-b35fd1f77279","Type":"ContainerDied","Data":"0d0196ce0534991336f1321ab7bbe989a4c181708b0553d95d8507276eb1284d"} Mar 12 14:43:58.283115 master-0 kubenswrapper[37036]: I0312 14:43:58.282112 37036 generic.go:334] "Generic (PLEG): container finished" podID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerID="1707d97483ddf79d3f0a97b8a11a9a146f77bbeb8a7efad5d29624ec246dc9a9" exitCode=0 Mar 12 14:43:58.283115 master-0 kubenswrapper[37036]: I0312 14:43:58.282146 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" event={"ID":"fd075974-a975-4e1b-af3b-ba8bdf4678e0","Type":"ContainerDied","Data":"1707d97483ddf79d3f0a97b8a11a9a146f77bbeb8a7efad5d29624ec246dc9a9"} Mar 12 14:43:58.283115 master-0 kubenswrapper[37036]: I0312 14:43:58.282166 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" event={"ID":"fd075974-a975-4e1b-af3b-ba8bdf4678e0","Type":"ContainerStarted","Data":"92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822"} Mar 12 14:43:59.291390 master-0 kubenswrapper[37036]: I0312 14:43:59.291283 37036 generic.go:334] "Generic (PLEG): container finished" podID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerID="edd9280aba13a4cae80d71132609bfa755d6912ab9247646739522688fe5091e" exitCode=0 Mar 12 14:43:59.292375 master-0 kubenswrapper[37036]: I0312 14:43:59.291377 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" event={"ID":"80026a08-d982-4f4b-a2bf-b35fd1f77279","Type":"ContainerDied","Data":"edd9280aba13a4cae80d71132609bfa755d6912ab9247646739522688fe5091e"} Mar 12 14:43:59.295757 master-0 kubenswrapper[37036]: I0312 14:43:59.295645 37036 
generic.go:334] "Generic (PLEG): container finished" podID="0f47aa02-212a-41fc-9069-504b571229aa" containerID="61421d0b524150afbca9c87ac76de712c76af0648881af93c7b8da6b6e552e43" exitCode=0 Mar 12 14:43:59.295947 master-0 kubenswrapper[37036]: I0312 14:43:59.295771 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" event={"ID":"0f47aa02-212a-41fc-9069-504b571229aa","Type":"ContainerDied","Data":"61421d0b524150afbca9c87ac76de712c76af0648881af93c7b8da6b6e552e43"} Mar 12 14:44:00.305287 master-0 kubenswrapper[37036]: I0312 14:44:00.305225 37036 generic.go:334] "Generic (PLEG): container finished" podID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerID="0ae80f987593dded874fa74a30ab54ff93702e0f8cb797c03800e9f1535e3651" exitCode=0 Mar 12 14:44:00.306087 master-0 kubenswrapper[37036]: I0312 14:44:00.305268 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" event={"ID":"fd075974-a975-4e1b-af3b-ba8bdf4678e0","Type":"ContainerDied","Data":"0ae80f987593dded874fa74a30ab54ff93702e0f8cb797c03800e9f1535e3651"} Mar 12 14:44:00.664096 master-0 kubenswrapper[37036]: I0312 14:44:00.664007 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" Mar 12 14:44:00.673816 master-0 kubenswrapper[37036]: I0312 14:44:00.673760 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:44:00.803429 master-0 kubenswrapper[37036]: I0312 14:44:00.803366 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4xms\" (UniqueName: \"kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms\") pod \"0f47aa02-212a-41fc-9069-504b571229aa\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " Mar 12 14:44:00.803429 master-0 kubenswrapper[37036]: I0312 14:44:00.803412 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnxfv\" (UniqueName: \"kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv\") pod \"80026a08-d982-4f4b-a2bf-b35fd1f77279\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " Mar 12 14:44:00.803768 master-0 kubenswrapper[37036]: I0312 14:44:00.803516 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle\") pod \"80026a08-d982-4f4b-a2bf-b35fd1f77279\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " Mar 12 14:44:00.803768 master-0 kubenswrapper[37036]: I0312 14:44:00.803618 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util\") pod \"80026a08-d982-4f4b-a2bf-b35fd1f77279\" (UID: \"80026a08-d982-4f4b-a2bf-b35fd1f77279\") " Mar 12 14:44:00.803768 master-0 kubenswrapper[37036]: I0312 14:44:00.803643 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle\") pod \"0f47aa02-212a-41fc-9069-504b571229aa\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " Mar 12 14:44:00.803768 master-0 kubenswrapper[37036]: I0312 
14:44:00.803660 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util\") pod \"0f47aa02-212a-41fc-9069-504b571229aa\" (UID: \"0f47aa02-212a-41fc-9069-504b571229aa\") " Mar 12 14:44:00.805170 master-0 kubenswrapper[37036]: I0312 14:44:00.805131 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle" (OuterVolumeSpecName: "bundle") pod "0f47aa02-212a-41fc-9069-504b571229aa" (UID: "0f47aa02-212a-41fc-9069-504b571229aa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:44:00.806475 master-0 kubenswrapper[37036]: I0312 14:44:00.806384 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms" (OuterVolumeSpecName: "kube-api-access-r4xms") pod "0f47aa02-212a-41fc-9069-504b571229aa" (UID: "0f47aa02-212a-41fc-9069-504b571229aa"). InnerVolumeSpecName "kube-api-access-r4xms". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:44:00.806774 master-0 kubenswrapper[37036]: I0312 14:44:00.806715 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle" (OuterVolumeSpecName: "bundle") pod "80026a08-d982-4f4b-a2bf-b35fd1f77279" (UID: "80026a08-d982-4f4b-a2bf-b35fd1f77279"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:44:00.807363 master-0 kubenswrapper[37036]: I0312 14:44:00.807313 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv" (OuterVolumeSpecName: "kube-api-access-cnxfv") pod "80026a08-d982-4f4b-a2bf-b35fd1f77279" (UID: "80026a08-d982-4f4b-a2bf-b35fd1f77279"). InnerVolumeSpecName "kube-api-access-cnxfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:44:00.811165 master-0 kubenswrapper[37036]: I0312 14:44:00.811109 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util" (OuterVolumeSpecName: "util") pod "0f47aa02-212a-41fc-9069-504b571229aa" (UID: "0f47aa02-212a-41fc-9069-504b571229aa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:44:00.812391 master-0 kubenswrapper[37036]: I0312 14:44:00.812342 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util" (OuterVolumeSpecName: "util") pod "80026a08-d982-4f4b-a2bf-b35fd1f77279" (UID: "80026a08-d982-4f4b-a2bf-b35fd1f77279"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:44:00.905894 master-0 kubenswrapper[37036]: I0312 14:44:00.905807 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4xms\" (UniqueName: \"kubernetes.io/projected/0f47aa02-212a-41fc-9069-504b571229aa-kube-api-access-r4xms\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:00.906431 master-0 kubenswrapper[37036]: I0312 14:44:00.906400 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnxfv\" (UniqueName: \"kubernetes.io/projected/80026a08-d982-4f4b-a2bf-b35fd1f77279-kube-api-access-cnxfv\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:00.906518 master-0 kubenswrapper[37036]: I0312 14:44:00.906442 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:00.906518 master-0 kubenswrapper[37036]: I0312 14:44:00.906463 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/80026a08-d982-4f4b-a2bf-b35fd1f77279-util\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:00.906518 master-0 kubenswrapper[37036]: I0312 14:44:00.906485 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:00.906518 master-0 kubenswrapper[37036]: I0312 14:44:00.906502 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f47aa02-212a-41fc-9069-504b571229aa-util\") on node \"master-0\" DevicePath \"\"" Mar 12 14:44:01.315979 master-0 kubenswrapper[37036]: I0312 14:44:01.315667 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" Mar 12 14:44:01.316469 master-0 kubenswrapper[37036]: I0312 14:44:01.316006 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ht8fw" event={"ID":"80026a08-d982-4f4b-a2bf-b35fd1f77279","Type":"ContainerDied","Data":"6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8"} Mar 12 14:44:01.316469 master-0 kubenswrapper[37036]: I0312 14:44:01.316052 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd7624edac6ab6df89b3e74c3e87e83a1368aaf3185d798ca7736051a5419e8" Mar 12 14:44:01.319185 master-0 kubenswrapper[37036]: I0312 14:44:01.319153 37036 generic.go:334] "Generic (PLEG): container finished" podID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerID="41969c5eed5a2e29b3416ebc442304de53b0980b7946548fe58e71d0d53489ee" exitCode=0 Mar 12 14:44:01.319296 master-0 kubenswrapper[37036]: I0312 14:44:01.319228 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" event={"ID":"fd075974-a975-4e1b-af3b-ba8bdf4678e0","Type":"ContainerDied","Data":"41969c5eed5a2e29b3416ebc442304de53b0980b7946548fe58e71d0d53489ee"} Mar 12 14:44:01.323295 master-0 kubenswrapper[37036]: I0312 14:44:01.323261 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" event={"ID":"0f47aa02-212a-41fc-9069-504b571229aa","Type":"ContainerDied","Data":"90bdf827bcf46e4a8ca2590abe21dd9c8b6ceaf3998e8c639ec4f75f466da579"} Mar 12 14:44:01.323491 master-0 kubenswrapper[37036]: I0312 14:44:01.323473 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90bdf827bcf46e4a8ca2590abe21dd9c8b6ceaf3998e8c639ec4f75f466da579" Mar 12 14:44:01.323597 
master-0 kubenswrapper[37036]: I0312 14:44:01.323299 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e54h9l4" Mar 12 14:44:02.623414 master-0 kubenswrapper[37036]: I0312 14:44:02.623367 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" Mar 12 14:44:02.733716 master-0 kubenswrapper[37036]: I0312 14:44:02.733632 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsws5\" (UniqueName: \"kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5\") pod \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " Mar 12 14:44:02.733716 master-0 kubenswrapper[37036]: I0312 14:44:02.733708 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util\") pod \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " Mar 12 14:44:02.734104 master-0 kubenswrapper[37036]: I0312 14:44:02.733749 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle\") pod \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\" (UID: \"fd075974-a975-4e1b-af3b-ba8bdf4678e0\") " Mar 12 14:44:02.734536 master-0 kubenswrapper[37036]: I0312 14:44:02.734496 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle" (OuterVolumeSpecName: "bundle") pod "fd075974-a975-4e1b-af3b-ba8bdf4678e0" (UID: "fd075974-a975-4e1b-af3b-ba8bdf4678e0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:44:02.738393 master-0 kubenswrapper[37036]: I0312 14:44:02.738313 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5" (OuterVolumeSpecName: "kube-api-access-hsws5") pod "fd075974-a975-4e1b-af3b-ba8bdf4678e0" (UID: "fd075974-a975-4e1b-af3b-ba8bdf4678e0"). InnerVolumeSpecName "kube-api-access-hsws5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:44:02.749063 master-0 kubenswrapper[37036]: I0312 14:44:02.749003 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util" (OuterVolumeSpecName: "util") pod "fd075974-a975-4e1b-af3b-ba8bdf4678e0" (UID: "fd075974-a975-4e1b-af3b-ba8bdf4678e0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:44:02.836075 master-0 kubenswrapper[37036]: I0312 14:44:02.835994 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:02.836075 master-0 kubenswrapper[37036]: I0312 14:44:02.836061 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsws5\" (UniqueName: \"kubernetes.io/projected/fd075974-a975-4e1b-af3b-ba8bdf4678e0-kube-api-access-hsws5\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:02.836075 master-0 kubenswrapper[37036]: I0312 14:44:02.836080 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd075974-a975-4e1b-af3b-ba8bdf4678e0-util\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:03.338280 master-0 kubenswrapper[37036]: I0312 14:44:03.338213 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p" event={"ID":"fd075974-a975-4e1b-af3b-ba8bdf4678e0","Type":"ContainerDied","Data":"92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822"}
Mar 12 14:44:03.338280 master-0 kubenswrapper[37036]: I0312 14:44:03.338262 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ce7d8e6848c55ea07d508d3d04b9e0eb7e9c0e9ef6531b93d86ea3f7e16822"
Mar 12 14:44:03.338749 master-0 kubenswrapper[37036]: I0312 14:44:03.338317 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8746hq4p"
Mar 12 14:44:04.100025 master-0 kubenswrapper[37036]: I0312 14:44:04.099938 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"]
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100392 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100408 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100427 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100433 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100451 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="pull"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100457 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="pull"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100474 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100479 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100491 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100496 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100517 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100524 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="util"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100539 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="pull"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100546 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="pull"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100572 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100580 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="extract"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: E0312 14:44:04.100589 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="pull"
Mar 12 14:44:04.100676 master-0 kubenswrapper[37036]: I0312 14:44:04.100595 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="pull"
Mar 12 14:44:04.101446 master-0 kubenswrapper[37036]: I0312 14:44:04.100954 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="80026a08-d982-4f4b-a2bf-b35fd1f77279" containerName="extract"
Mar 12 14:44:04.101446 master-0 kubenswrapper[37036]: I0312 14:44:04.100983 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f47aa02-212a-41fc-9069-504b571229aa" containerName="extract"
Mar 12 14:44:04.101446 master-0 kubenswrapper[37036]: I0312 14:44:04.101003 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd075974-a975-4e1b-af3b-ba8bdf4678e0" containerName="extract"
Mar 12 14:44:04.102119 master-0 kubenswrapper[37036]: I0312 14:44:04.102093 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.110130 master-0 kubenswrapper[37036]: I0312 14:44:04.110070 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-9qbhc"
Mar 12 14:44:04.118754 master-0 kubenswrapper[37036]: I0312 14:44:04.118691 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"]
Mar 12 14:44:04.295960 master-0 kubenswrapper[37036]: I0312 14:44:04.295843 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.296212 master-0 kubenswrapper[37036]: I0312 14:44:04.295962 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dd5z\" (UniqueName: \"kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.296630 master-0 kubenswrapper[37036]: I0312 14:44:04.296567 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.397742 master-0 kubenswrapper[37036]: I0312 14:44:04.397596 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.397742 master-0 kubenswrapper[37036]: I0312 14:44:04.397677 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.397742 master-0 kubenswrapper[37036]: I0312 14:44:04.397701 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dd5z\" (UniqueName: \"kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.398674 master-0 kubenswrapper[37036]: I0312 14:44:04.398636 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.398865 master-0 kubenswrapper[37036]: I0312 14:44:04.398762 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.471296 master-0 kubenswrapper[37036]: I0312 14:44:04.471236 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dd5z\" (UniqueName: \"kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:04.718731 master-0 kubenswrapper[37036]: I0312 14:44:04.718582 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:05.261917 master-0 kubenswrapper[37036]: I0312 14:44:05.259141 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"]
Mar 12 14:44:05.355904 master-0 kubenswrapper[37036]: I0312 14:44:05.355793 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8" event={"ID":"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2","Type":"ContainerStarted","Data":"e9d6b062aaec09867f18a04863038a40edf3310d06b79e144d462884d88d05a8"}
Mar 12 14:44:06.379946 master-0 kubenswrapper[37036]: I0312 14:44:06.377655 37036 generic.go:334] "Generic (PLEG): container finished" podID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerID="cd88c8de4d53bbc70a6f9ce5232500e15863d8f10d42975045b6755acfe1c694" exitCode=0
Mar 12 14:44:06.379946 master-0 kubenswrapper[37036]: I0312 14:44:06.377735 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8" event={"ID":"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2","Type":"ContainerDied","Data":"cd88c8de4d53bbc70a6f9ce5232500e15863d8f10d42975045b6755acfe1c694"}
Mar 12 14:44:06.852638 master-0 kubenswrapper[37036]: I0312 14:44:06.852581 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"]
Mar 12 14:44:06.853608 master-0 kubenswrapper[37036]: I0312 14:44:06.853581 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:06.855210 master-0 kubenswrapper[37036]: I0312 14:44:06.855176 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 12 14:44:06.855780 master-0 kubenswrapper[37036]: I0312 14:44:06.855736 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 12 14:44:06.882241 master-0 kubenswrapper[37036]: I0312 14:44:06.882185 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"]
Mar 12 14:44:06.951028 master-0 kubenswrapper[37036]: I0312 14:44:06.950949 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvq9c\" (UniqueName: \"kubernetes.io/projected/28e064ff-6ba8-4ea8-8fe2-fc542b090462-kube-api-access-zvq9c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:06.951250 master-0 kubenswrapper[37036]: I0312 14:44:06.951068 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28e064ff-6ba8-4ea8-8fe2-fc542b090462-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.052137 master-0 kubenswrapper[37036]: I0312 14:44:07.052069 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28e064ff-6ba8-4ea8-8fe2-fc542b090462-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.052364 master-0 kubenswrapper[37036]: I0312 14:44:07.052207 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvq9c\" (UniqueName: \"kubernetes.io/projected/28e064ff-6ba8-4ea8-8fe2-fc542b090462-kube-api-access-zvq9c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.052665 master-0 kubenswrapper[37036]: I0312 14:44:07.052620 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/28e064ff-6ba8-4ea8-8fe2-fc542b090462-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.067527 master-0 kubenswrapper[37036]: I0312 14:44:07.067480 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvq9c\" (UniqueName: \"kubernetes.io/projected/28e064ff-6ba8-4ea8-8fe2-fc542b090462-kube-api-access-zvq9c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-grxzz\" (UID: \"28e064ff-6ba8-4ea8-8fe2-fc542b090462\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.169673 master-0 kubenswrapper[37036]: I0312 14:44:07.169414 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"
Mar 12 14:44:07.679957 master-0 kubenswrapper[37036]: W0312 14:44:07.679882 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28e064ff_6ba8_4ea8_8fe2_fc542b090462.slice/crio-69141ee9d9ce2ca2126d50823c1663066ee80c2196d2b7ae6fdef9bbf61a0804 WatchSource:0}: Error finding container 69141ee9d9ce2ca2126d50823c1663066ee80c2196d2b7ae6fdef9bbf61a0804: Status 404 returned error can't find the container with id 69141ee9d9ce2ca2126d50823c1663066ee80c2196d2b7ae6fdef9bbf61a0804
Mar 12 14:44:07.683200 master-0 kubenswrapper[37036]: I0312 14:44:07.683165 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz"]
Mar 12 14:44:08.394495 master-0 kubenswrapper[37036]: I0312 14:44:08.394415 37036 generic.go:334] "Generic (PLEG): container finished" podID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerID="e84570ea7435f97d56848c7a9a5ab00efb3a67810356a8efce40ef7f6b98ca20" exitCode=0
Mar 12 14:44:08.394714 master-0 kubenswrapper[37036]: I0312 14:44:08.394516 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8" event={"ID":"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2","Type":"ContainerDied","Data":"e84570ea7435f97d56848c7a9a5ab00efb3a67810356a8efce40ef7f6b98ca20"}
Mar 12 14:44:08.397795 master-0 kubenswrapper[37036]: I0312 14:44:08.397765 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz" event={"ID":"28e064ff-6ba8-4ea8-8fe2-fc542b090462","Type":"ContainerStarted","Data":"69141ee9d9ce2ca2126d50823c1663066ee80c2196d2b7ae6fdef9bbf61a0804"}
Mar 12 14:44:09.426021 master-0 kubenswrapper[37036]: I0312 14:44:09.425955 37036 generic.go:334] "Generic (PLEG): container finished" podID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerID="61d25c72a552dbb1bdd51a37c3fd0bc5d55441c6371934c9dd869441dd764e8d" exitCode=0
Mar 12 14:44:09.426597 master-0 kubenswrapper[37036]: I0312 14:44:09.426420 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8" event={"ID":"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2","Type":"ContainerDied","Data":"61d25c72a552dbb1bdd51a37c3fd0bc5d55441c6371934c9dd869441dd764e8d"}
Mar 12 14:44:11.797610 master-0 kubenswrapper[37036]: I0312 14:44:11.797558 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:11.931246 master-0 kubenswrapper[37036]: I0312 14:44:11.929124 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle\") pod \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") "
Mar 12 14:44:11.931246 master-0 kubenswrapper[37036]: I0312 14:44:11.929217 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util\") pod \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") "
Mar 12 14:44:11.931246 master-0 kubenswrapper[37036]: I0312 14:44:11.929382 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dd5z\" (UniqueName: \"kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z\") pod \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\" (UID: \"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2\") "
Mar 12 14:44:11.932323 master-0 kubenswrapper[37036]: I0312 14:44:11.932263 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle" (OuterVolumeSpecName: "bundle") pod "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" (UID: "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:44:11.934956 master-0 kubenswrapper[37036]: I0312 14:44:11.934026 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z" (OuterVolumeSpecName: "kube-api-access-8dd5z") pod "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" (UID: "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2"). InnerVolumeSpecName "kube-api-access-8dd5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:44:11.939380 master-0 kubenswrapper[37036]: I0312 14:44:11.939325 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util" (OuterVolumeSpecName: "util") pod "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" (UID: "6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:44:12.031359 master-0 kubenswrapper[37036]: I0312 14:44:12.031312 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dd5z\" (UniqueName: \"kubernetes.io/projected/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-kube-api-access-8dd5z\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:12.031359 master-0 kubenswrapper[37036]: I0312 14:44:12.031354 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:12.031490 master-0 kubenswrapper[37036]: I0312 14:44:12.031364 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2-util\") on node \"master-0\" DevicePath \"\""
Mar 12 14:44:12.447776 master-0 kubenswrapper[37036]: I0312 14:44:12.447670 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8" event={"ID":"6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2","Type":"ContainerDied","Data":"e9d6b062aaec09867f18a04863038a40edf3310d06b79e144d462884d88d05a8"}
Mar 12 14:44:12.448005 master-0 kubenswrapper[37036]: I0312 14:44:12.447991 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d6b062aaec09867f18a04863038a40edf3310d06b79e144d462884d88d05a8"
Mar 12 14:44:12.448158 master-0 kubenswrapper[37036]: I0312 14:44:12.448144 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j96s8"
Mar 12 14:44:12.458318 master-0 kubenswrapper[37036]: I0312 14:44:12.458285 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz" event={"ID":"28e064ff-6ba8-4ea8-8fe2-fc542b090462","Type":"ContainerStarted","Data":"64348015679fbc2c9f7b5399dfa5823a1fff900b8a46dc0a78e1fc12af5b6efb"}
Mar 12 14:44:12.488831 master-0 kubenswrapper[37036]: I0312 14:44:12.488731 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-grxzz" podStartSLOduration=2.33208795 podStartE2EDuration="6.488703313s" podCreationTimestamp="2026-03-12 14:44:06 +0000 UTC" firstStartedPulling="2026-03-12 14:44:07.684354509 +0000 UTC m=+506.692095446" lastFinishedPulling="2026-03-12 14:44:11.840969882 +0000 UTC m=+510.848710809" observedRunningTime="2026-03-12 14:44:12.483958576 +0000 UTC m=+511.491699513" watchObservedRunningTime="2026-03-12 14:44:12.488703313 +0000 UTC m=+511.496444250"
Mar 12 14:44:18.032578 master-0 kubenswrapper[37036]: I0312 14:44:18.032499 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7sh2t"]
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: E0312 14:44:18.032834 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="pull"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: I0312 14:44:18.032851 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="pull"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: E0312 14:44:18.032877 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="util"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: I0312 14:44:18.032887 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="util"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: E0312 14:44:18.032921 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="extract"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: I0312 14:44:18.032930 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="extract"
Mar 12 14:44:18.033383 master-0 kubenswrapper[37036]: I0312 14:44:18.033136 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e092517-b57e-4ed0-b4e9-cb5b22b8a3b2" containerName="extract"
Mar 12 14:44:18.033879 master-0 kubenswrapper[37036]: I0312 14:44:18.033851 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.036020 master-0 kubenswrapper[37036]: I0312 14:44:18.035962 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 12 14:44:18.036134 master-0 kubenswrapper[37036]: I0312 14:44:18.036059 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 12 14:44:18.047127 master-0 kubenswrapper[37036]: I0312 14:44:18.047087 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7sh2t"]
Mar 12 14:44:18.053947 master-0 kubenswrapper[37036]: I0312 14:44:18.053881 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.054152 master-0 kubenswrapper[37036]: I0312 14:44:18.054003 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8824x\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-kube-api-access-8824x\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.156870 master-0 kubenswrapper[37036]: I0312 14:44:18.156782 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.157153 master-0 kubenswrapper[37036]: I0312 14:44:18.156957 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8824x\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-kube-api-access-8824x\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.177954 master-0 kubenswrapper[37036]: I0312 14:44:18.177835 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.179928 master-0 kubenswrapper[37036]: I0312 14:44:18.179883 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8824x\" (UniqueName: \"kubernetes.io/projected/97e72e99-1465-4b6e-89a1-1dcd9a574357-kube-api-access-8824x\") pod \"cert-manager-webhook-6888856db4-7sh2t\" (UID: \"97e72e99-1465-4b6e-89a1-1dcd9a574357\") " pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.349145 master-0 kubenswrapper[37036]: I0312 14:44:18.349009 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t"
Mar 12 14:44:18.420758 master-0 kubenswrapper[37036]: I0312 14:44:18.420694 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-k962n"]
Mar 12 14:44:18.421863 master-0 kubenswrapper[37036]: I0312 14:44:18.421828 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.456668 master-0 kubenswrapper[37036]: I0312 14:44:18.455872 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-k962n"]
Mar 12 14:44:18.462070 master-0 kubenswrapper[37036]: I0312 14:44:18.461433 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.462070 master-0 kubenswrapper[37036]: I0312 14:44:18.461471 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86xq\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-kube-api-access-x86xq\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.573122 master-0 kubenswrapper[37036]: I0312 14:44:18.572776 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.573122 master-0 kubenswrapper[37036]: I0312 14:44:18.572841 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86xq\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-kube-api-access-x86xq\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.595369 master-0 kubenswrapper[37036]: I0312 14:44:18.595307 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.596832 master-0 kubenswrapper[37036]: I0312 14:44:18.596791 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86xq\" (UniqueName: \"kubernetes.io/projected/12f049d2-2e69-4afe-b56e-de0f10dbf9f7-kube-api-access-x86xq\") pod \"cert-manager-cainjector-5545bd876-k962n\" (UID: \"12f049d2-2e69-4afe-b56e-de0f10dbf9f7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.756365 master-0 kubenswrapper[37036]: I0312 14:44:18.756294 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-k962n"
Mar 12 14:44:18.782349 master-0 kubenswrapper[37036]: I0312 14:44:18.782186 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-7sh2t"]
Mar 12 14:44:19.187615 master-0 kubenswrapper[37036]: I0312 14:44:19.187488 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"]
Mar 12 14:44:19.191036 master-0 kubenswrapper[37036]: I0312 14:44:19.188435 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"
Mar 12 14:44:19.191036 master-0 kubenswrapper[37036]: I0312 14:44:19.190602 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 12 14:44:19.192696 master-0 kubenswrapper[37036]: I0312 14:44:19.192660 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 12 14:44:19.198882 master-0 kubenswrapper[37036]: I0312 14:44:19.198823 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"]
Mar 12 14:44:19.267529 master-0 kubenswrapper[37036]: I0312 14:44:19.261147 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-k962n"]
Mar 12 14:44:19.268492 master-0 kubenswrapper[37036]: W0312 14:44:19.268427 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12f049d2_2e69_4afe_b56e_de0f10dbf9f7.slice/crio-6f8d4991bc3444c0a621da23c5214f832febed92c85f79e37d69f1dd79f537cc WatchSource:0}: Error finding container 6f8d4991bc3444c0a621da23c5214f832febed92c85f79e37d69f1dd79f537cc: Status 404 returned error can't find the container with id 6f8d4991bc3444c0a621da23c5214f832febed92c85f79e37d69f1dd79f537cc
Mar 12 14:44:19.285998 master-0 kubenswrapper[37036]: I0312 14:44:19.285939 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx8g\" (UniqueName: \"kubernetes.io/projected/8f4463a6-493b-4af7-959a-eaef1ff7048f-kube-api-access-9qx8g\") pod \"nmstate-operator-796d4cfff4-tkxjb\" (UID: \"8f4463a6-493b-4af7-959a-eaef1ff7048f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"
Mar 12 14:44:19.387992 master-0 kubenswrapper[37036]: I0312 14:44:19.387668 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qx8g\" (UniqueName: \"kubernetes.io/projected/8f4463a6-493b-4af7-959a-eaef1ff7048f-kube-api-access-9qx8g\") pod \"nmstate-operator-796d4cfff4-tkxjb\" (UID: \"8f4463a6-493b-4af7-959a-eaef1ff7048f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"
Mar 12 14:44:19.407313 master-0 kubenswrapper[37036]: I0312 14:44:19.407239 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qx8g\" (UniqueName: \"kubernetes.io/projected/8f4463a6-493b-4af7-959a-eaef1ff7048f-kube-api-access-9qx8g\") pod \"nmstate-operator-796d4cfff4-tkxjb\" (UID: \"8f4463a6-493b-4af7-959a-eaef1ff7048f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"
Mar 12 14:44:19.522549 master-0 kubenswrapper[37036]: I0312 14:44:19.522430 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-k962n" event={"ID":"12f049d2-2e69-4afe-b56e-de0f10dbf9f7","Type":"ContainerStarted","Data":"6f8d4991bc3444c0a621da23c5214f832febed92c85f79e37d69f1dd79f537cc"}
Mar 12 14:44:19.523841 master-0 kubenswrapper[37036]: I0312 14:44:19.523781 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t" event={"ID":"97e72e99-1465-4b6e-89a1-1dcd9a574357","Type":"ContainerStarted","Data":"9c1d70d9cde817a4bea0b9ff8e3ccc014e8dcaaa06c7be8485ca95bf5388b824"}
Mar 12
14:44:19.536273 master-0 kubenswrapper[37036]: I0312 14:44:19.536212 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb" Mar 12 14:44:20.042974 master-0 kubenswrapper[37036]: I0312 14:44:20.042926 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb"] Mar 12 14:44:20.533067 master-0 kubenswrapper[37036]: I0312 14:44:20.532998 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb" event={"ID":"8f4463a6-493b-4af7-959a-eaef1ff7048f","Type":"ContainerStarted","Data":"87196e77f795aeb672aedce05451068fde3a43c155c9fbb4a724cc563386adfe"} Mar 12 14:44:24.417300 master-0 kubenswrapper[37036]: I0312 14:44:24.417209 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-gffgh"] Mar 12 14:44:24.418851 master-0 kubenswrapper[37036]: I0312 14:44:24.418515 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.434227 master-0 kubenswrapper[37036]: I0312 14:44:24.434176 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-gffgh"] Mar 12 14:44:24.532156 master-0 kubenswrapper[37036]: I0312 14:44:24.532089 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9p5p\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-kube-api-access-d9p5p\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.532391 master-0 kubenswrapper[37036]: I0312 14:44:24.532172 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-bound-sa-token\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.638943 master-0 kubenswrapper[37036]: I0312 14:44:24.634267 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9p5p\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-kube-api-access-d9p5p\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.638943 master-0 kubenswrapper[37036]: I0312 14:44:24.634350 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-bound-sa-token\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.657819 master-0 
kubenswrapper[37036]: I0312 14:44:24.657770 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-bound-sa-token\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.658041 master-0 kubenswrapper[37036]: I0312 14:44:24.657902 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9p5p\" (UniqueName: \"kubernetes.io/projected/12c76f30-5a00-4192-8b28-4c5d565150da-kube-api-access-d9p5p\") pod \"cert-manager-545d4d4674-gffgh\" (UID: \"12c76f30-5a00-4192-8b28-4c5d565150da\") " pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:24.771001 master-0 kubenswrapper[37036]: I0312 14:44:24.770939 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-gffgh" Mar 12 14:44:26.281944 master-0 kubenswrapper[37036]: I0312 14:44:26.275721 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c"] Mar 12 14:44:26.283522 master-0 kubenswrapper[37036]: I0312 14:44:26.283489 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.294142 master-0 kubenswrapper[37036]: I0312 14:44:26.294099 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 12 14:44:26.301962 master-0 kubenswrapper[37036]: I0312 14:44:26.296993 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 12 14:44:26.301962 master-0 kubenswrapper[37036]: I0312 14:44:26.299585 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c"] Mar 12 14:44:26.303019 master-0 kubenswrapper[37036]: I0312 14:44:26.302608 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 12 14:44:26.303019 master-0 kubenswrapper[37036]: I0312 14:44:26.302774 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 12 14:44:26.384064 master-0 kubenswrapper[37036]: I0312 14:44:26.383952 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88t88\" (UniqueName: \"kubernetes.io/projected/5160bc8b-ff23-474f-b5b9-fa90f8e78394-kube-api-access-88t88\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.384064 master-0 kubenswrapper[37036]: I0312 14:44:26.384046 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-webhook-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " 
pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.384407 master-0 kubenswrapper[37036]: I0312 14:44:26.384144 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-apiservice-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.487475 master-0 kubenswrapper[37036]: I0312 14:44:26.487379 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88t88\" (UniqueName: \"kubernetes.io/projected/5160bc8b-ff23-474f-b5b9-fa90f8e78394-kube-api-access-88t88\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.487475 master-0 kubenswrapper[37036]: I0312 14:44:26.487448 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-webhook-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.487709 master-0 kubenswrapper[37036]: I0312 14:44:26.487491 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-apiservice-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.492207 master-0 kubenswrapper[37036]: I0312 14:44:26.492179 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-webhook-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.497006 master-0 kubenswrapper[37036]: I0312 14:44:26.495524 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5160bc8b-ff23-474f-b5b9-fa90f8e78394-apiservice-cert\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.539496 master-0 kubenswrapper[37036]: I0312 14:44:26.539401 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88t88\" (UniqueName: \"kubernetes.io/projected/5160bc8b-ff23-474f-b5b9-fa90f8e78394-kube-api-access-88t88\") pod \"metallb-operator-controller-manager-794566cf8d-rcz9c\" (UID: \"5160bc8b-ff23-474f-b5b9-fa90f8e78394\") " pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.706636 master-0 kubenswrapper[37036]: I0312 14:44:26.705717 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:26.911164 master-0 kubenswrapper[37036]: I0312 14:44:26.911022 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-644b57d759-m5szb"] Mar 12 14:44:26.920253 master-0 kubenswrapper[37036]: I0312 14:44:26.918787 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:26.931082 master-0 kubenswrapper[37036]: I0312 14:44:26.930763 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 12 14:44:26.931082 master-0 kubenswrapper[37036]: I0312 14:44:26.930886 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 14:44:26.948220 master-0 kubenswrapper[37036]: I0312 14:44:26.948107 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-644b57d759-m5szb"] Mar 12 14:44:27.100576 master-0 kubenswrapper[37036]: I0312 14:44:27.100509 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-apiservice-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.100809 master-0 kubenswrapper[37036]: I0312 14:44:27.100654 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7pc\" (UniqueName: \"kubernetes.io/projected/9ae40425-b1c6-4fe9-bf12-7af305cc7990-kube-api-access-nv7pc\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.100809 master-0 kubenswrapper[37036]: I0312 14:44:27.100741 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-webhook-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: 
\"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.202926 master-0 kubenswrapper[37036]: I0312 14:44:27.202068 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-apiservice-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.202926 master-0 kubenswrapper[37036]: I0312 14:44:27.202145 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7pc\" (UniqueName: \"kubernetes.io/projected/9ae40425-b1c6-4fe9-bf12-7af305cc7990-kube-api-access-nv7pc\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.202926 master-0 kubenswrapper[37036]: I0312 14:44:27.202176 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-webhook-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.206540 master-0 kubenswrapper[37036]: I0312 14:44:27.206499 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-webhook-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.225030 master-0 kubenswrapper[37036]: I0312 14:44:27.224607 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ae40425-b1c6-4fe9-bf12-7af305cc7990-apiservice-cert\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.231928 master-0 kubenswrapper[37036]: I0312 14:44:27.229884 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv7pc\" (UniqueName: \"kubernetes.io/projected/9ae40425-b1c6-4fe9-bf12-7af305cc7990-kube-api-access-nv7pc\") pod \"metallb-operator-webhook-server-644b57d759-m5szb\" (UID: \"9ae40425-b1c6-4fe9-bf12-7af305cc7990\") " pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:27.287120 master-0 kubenswrapper[37036]: I0312 14:44:27.287041 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:29.234941 master-0 kubenswrapper[37036]: I0312 14:44:29.231287 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-gffgh"] Mar 12 14:44:29.288679 master-0 kubenswrapper[37036]: I0312 14:44:29.288629 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-644b57d759-m5szb"] Mar 12 14:44:29.311225 master-0 kubenswrapper[37036]: I0312 14:44:29.311137 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c"] Mar 12 14:44:29.313081 master-0 kubenswrapper[37036]: W0312 14:44:29.313033 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5160bc8b_ff23_474f_b5b9_fa90f8e78394.slice/crio-74c44f173bb778f617c17b786c8e62fc73e34b3c76585c183bef3ec222232876 WatchSource:0}: Error finding container 
74c44f173bb778f617c17b786c8e62fc73e34b3c76585c183bef3ec222232876: Status 404 returned error can't find the container with id 74c44f173bb778f617c17b786c8e62fc73e34b3c76585c183bef3ec222232876 Mar 12 14:44:29.647944 master-0 kubenswrapper[37036]: I0312 14:44:29.647858 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-gffgh" event={"ID":"12c76f30-5a00-4192-8b28-4c5d565150da","Type":"ContainerStarted","Data":"62d0eca074b76994ff285a6dbf06a0f25a534fd4537adb8f5275dd5202a887bb"} Mar 12 14:44:29.647944 master-0 kubenswrapper[37036]: I0312 14:44:29.647948 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-gffgh" event={"ID":"12c76f30-5a00-4192-8b28-4c5d565150da","Type":"ContainerStarted","Data":"25b9a028e402a2ec755f90151770c6d04897737146522913f347e7573371b060"} Mar 12 14:44:29.656577 master-0 kubenswrapper[37036]: I0312 14:44:29.656531 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" event={"ID":"9ae40425-b1c6-4fe9-bf12-7af305cc7990","Type":"ContainerStarted","Data":"b643d34c2208e529af68e598b9395d78f59faa7a9f373d2e73f041f544f34d29"} Mar 12 14:44:29.661859 master-0 kubenswrapper[37036]: I0312 14:44:29.661524 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" event={"ID":"5160bc8b-ff23-474f-b5b9-fa90f8e78394","Type":"ContainerStarted","Data":"74c44f173bb778f617c17b786c8e62fc73e34b3c76585c183bef3ec222232876"} Mar 12 14:44:29.663048 master-0 kubenswrapper[37036]: I0312 14:44:29.662988 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t" event={"ID":"97e72e99-1465-4b6e-89a1-1dcd9a574357","Type":"ContainerStarted","Data":"f3d6e240ff5799fbebf86dc20ac99ec4477d16396d6457748f702e7b85c8b380"} Mar 12 14:44:29.665073 master-0 kubenswrapper[37036]: I0312 14:44:29.663710 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t" Mar 12 14:44:29.670072 master-0 kubenswrapper[37036]: I0312 14:44:29.669971 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-gffgh" podStartSLOduration=5.669950657 podStartE2EDuration="5.669950657s" podCreationTimestamp="2026-03-12 14:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:44:29.668318796 +0000 UTC m=+528.676059733" watchObservedRunningTime="2026-03-12 14:44:29.669950657 +0000 UTC m=+528.677691594" Mar 12 14:44:29.678432 master-0 kubenswrapper[37036]: I0312 14:44:29.678349 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb" event={"ID":"8f4463a6-493b-4af7-959a-eaef1ff7048f","Type":"ContainerStarted","Data":"3f5e642ae388f4068ea8f541a58567d33ce8e23b458c1f5e0ced9f4d39ff4e40"} Mar 12 14:44:29.682141 master-0 kubenswrapper[37036]: I0312 14:44:29.682047 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-k962n" event={"ID":"12f049d2-2e69-4afe-b56e-de0f10dbf9f7","Type":"ContainerStarted","Data":"99a91aa277f30030cab15273fa1096ec1a80ee827a0ca40f55dc5953ddacddc1"} Mar 12 14:44:29.704030 master-0 kubenswrapper[37036]: I0312 14:44:29.703561 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t" podStartSLOduration=1.82611294 podStartE2EDuration="11.703543133s" podCreationTimestamp="2026-03-12 14:44:18 +0000 UTC" firstStartedPulling="2026-03-12 14:44:18.784279875 +0000 UTC m=+517.792020812" lastFinishedPulling="2026-03-12 14:44:28.661710068 +0000 UTC m=+527.669451005" observedRunningTime="2026-03-12 14:44:29.699842641 +0000 UTC m=+528.707583588" watchObservedRunningTime="2026-03-12 
14:44:29.703543133 +0000 UTC m=+528.711284070" Mar 12 14:44:29.729060 master-0 kubenswrapper[37036]: I0312 14:44:29.726648 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-k962n" podStartSLOduration=2.30184771 podStartE2EDuration="11.726631189s" podCreationTimestamp="2026-03-12 14:44:18 +0000 UTC" firstStartedPulling="2026-03-12 14:44:19.271255533 +0000 UTC m=+518.278996470" lastFinishedPulling="2026-03-12 14:44:28.696039012 +0000 UTC m=+527.703779949" observedRunningTime="2026-03-12 14:44:29.725943071 +0000 UTC m=+528.733684008" watchObservedRunningTime="2026-03-12 14:44:29.726631189 +0000 UTC m=+528.734372126" Mar 12 14:44:29.760859 master-0 kubenswrapper[37036]: I0312 14:44:29.760758 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-tkxjb" podStartSLOduration=2.140184706 podStartE2EDuration="10.760739758s" podCreationTimestamp="2026-03-12 14:44:19 +0000 UTC" firstStartedPulling="2026-03-12 14:44:20.048456788 +0000 UTC m=+519.056197725" lastFinishedPulling="2026-03-12 14:44:28.66901184 +0000 UTC m=+527.676752777" observedRunningTime="2026-03-12 14:44:29.753729433 +0000 UTC m=+528.761470370" watchObservedRunningTime="2026-03-12 14:44:29.760739758 +0000 UTC m=+528.768480695" Mar 12 14:44:33.360599 master-0 kubenswrapper[37036]: I0312 14:44:33.360550 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-7sh2t" Mar 12 14:44:36.050923 master-0 kubenswrapper[37036]: I0312 14:44:36.048408 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz"] Mar 12 14:44:36.050923 master-0 kubenswrapper[37036]: I0312 14:44:36.049588 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" Mar 12 14:44:36.059948 master-0 kubenswrapper[37036]: I0312 14:44:36.057248 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 12 14:44:36.059948 master-0 kubenswrapper[37036]: I0312 14:44:36.057350 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 12 14:44:36.078511 master-0 kubenswrapper[37036]: I0312 14:44:36.078120 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz"] Mar 12 14:44:36.226921 master-0 kubenswrapper[37036]: I0312 14:44:36.225859 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hz5z\" (UniqueName: \"kubernetes.io/projected/a05dfe81-0288-4c00-b2b4-381cd4e88467-kube-api-access-8hz5z\") pod \"obo-prometheus-operator-68bc856cb9-kxgwz\" (UID: \"a05dfe81-0288-4c00-b2b4-381cd4e88467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" Mar 12 14:44:36.259924 master-0 kubenswrapper[37036]: I0312 14:44:36.256038 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf"] Mar 12 14:44:36.259924 master-0 kubenswrapper[37036]: I0312 14:44:36.256977 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.270923 master-0 kubenswrapper[37036]: I0312 14:44:36.267719 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 12 14:44:36.283563 master-0 kubenswrapper[37036]: I0312 14:44:36.282962 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76"] Mar 12 14:44:36.286917 master-0 kubenswrapper[37036]: I0312 14:44:36.284287 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.322957 master-0 kubenswrapper[37036]: I0312 14:44:36.311420 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf"] Mar 12 14:44:36.330116 master-0 kubenswrapper[37036]: I0312 14:44:36.327919 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hz5z\" (UniqueName: \"kubernetes.io/projected/a05dfe81-0288-4c00-b2b4-381cd4e88467-kube-api-access-8hz5z\") pod \"obo-prometheus-operator-68bc856cb9-kxgwz\" (UID: \"a05dfe81-0288-4c00-b2b4-381cd4e88467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" Mar 12 14:44:36.330116 master-0 kubenswrapper[37036]: I0312 14:44:36.328325 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76"] Mar 12 14:44:36.361925 master-0 kubenswrapper[37036]: I0312 14:44:36.356589 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hz5z\" (UniqueName: \"kubernetes.io/projected/a05dfe81-0288-4c00-b2b4-381cd4e88467-kube-api-access-8hz5z\") pod \"obo-prometheus-operator-68bc856cb9-kxgwz\" 
(UID: \"a05dfe81-0288-4c00-b2b4-381cd4e88467\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" Mar 12 14:44:36.377926 master-0 kubenswrapper[37036]: I0312 14:44:36.376868 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" Mar 12 14:44:36.429924 master-0 kubenswrapper[37036]: I0312 14:44:36.429549 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.429924 master-0 kubenswrapper[37036]: I0312 14:44:36.429642 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.429924 master-0 kubenswrapper[37036]: I0312 14:44:36.429724 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.429924 master-0 kubenswrapper[37036]: I0312 14:44:36.429757 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.476929 master-0 kubenswrapper[37036]: I0312 14:44:36.467114 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f9rp8"] Mar 12 14:44:36.476929 master-0 kubenswrapper[37036]: I0312 14:44:36.470926 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.476929 master-0 kubenswrapper[37036]: I0312 14:44:36.473826 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 12 14:44:36.499930 master-0 kubenswrapper[37036]: I0312 14:44:36.497201 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f9rp8"] Mar 12 14:44:36.538489 master-0 kubenswrapper[37036]: I0312 14:44:36.532308 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.538489 master-0 kubenswrapper[37036]: I0312 14:44:36.532370 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.538489 master-0 kubenswrapper[37036]: I0312 14:44:36.532406 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.538489 master-0 kubenswrapper[37036]: I0312 14:44:36.532489 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.538489 master-0 kubenswrapper[37036]: I0312 14:44:36.537615 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.549084 master-0 kubenswrapper[37036]: I0312 14:44:36.546331 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.549084 master-0 kubenswrapper[37036]: I0312 
14:44:36.546537 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd478f89-821d-40ce-9c62-ea212e20696f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-lzj76\" (UID: \"dd478f89-821d-40ce-9c62-ea212e20696f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.549084 master-0 kubenswrapper[37036]: I0312 14:44:36.546945 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92320a06-534b-4135-9012-6f5099673ab1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf\" (UID: \"92320a06-534b-4135-9012-6f5099673ab1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.589229 master-0 kubenswrapper[37036]: I0312 14:44:36.588993 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" Mar 12 14:44:36.624106 master-0 kubenswrapper[37036]: I0312 14:44:36.621325 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lv55q"] Mar 12 14:44:36.627582 master-0 kubenswrapper[37036]: I0312 14:44:36.626074 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.631923 master-0 kubenswrapper[37036]: I0312 14:44:36.631467 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" Mar 12 14:44:36.635166 master-0 kubenswrapper[37036]: I0312 14:44:36.633196 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a75f59f-8dea-49f1-b506-fc2b89ef4252-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.635166 master-0 kubenswrapper[37036]: I0312 14:44:36.633239 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdsg8\" (UniqueName: \"kubernetes.io/projected/8a75f59f-8dea-49f1-b506-fc2b89ef4252-kube-api-access-hdsg8\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.641945 master-0 kubenswrapper[37036]: I0312 14:44:36.639068 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lv55q"] Mar 12 14:44:36.736930 master-0 kubenswrapper[37036]: I0312 14:44:36.735967 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvqg\" (UniqueName: \"kubernetes.io/projected/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-kube-api-access-wzvqg\") pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.736930 master-0 kubenswrapper[37036]: I0312 14:44:36.736072 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-openshift-service-ca\") 
pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.736930 master-0 kubenswrapper[37036]: I0312 14:44:36.736102 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a75f59f-8dea-49f1-b506-fc2b89ef4252-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.736930 master-0 kubenswrapper[37036]: I0312 14:44:36.736137 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdsg8\" (UniqueName: \"kubernetes.io/projected/8a75f59f-8dea-49f1-b506-fc2b89ef4252-kube-api-access-hdsg8\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.741254 master-0 kubenswrapper[37036]: I0312 14:44:36.739521 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a75f59f-8dea-49f1-b506-fc2b89ef4252-observability-operator-tls\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.764994 master-0 kubenswrapper[37036]: I0312 14:44:36.764529 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdsg8\" (UniqueName: \"kubernetes.io/projected/8a75f59f-8dea-49f1-b506-fc2b89ef4252-kube-api-access-hdsg8\") pod \"observability-operator-59bdc8b94-f9rp8\" (UID: \"8a75f59f-8dea-49f1-b506-fc2b89ef4252\") " pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.839403 master-0 kubenswrapper[37036]: I0312 
14:44:36.838189 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzvqg\" (UniqueName: \"kubernetes.io/projected/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-kube-api-access-wzvqg\") pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.839403 master-0 kubenswrapper[37036]: I0312 14:44:36.838277 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.839403 master-0 kubenswrapper[37036]: I0312 14:44:36.839191 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.840848 master-0 kubenswrapper[37036]: I0312 14:44:36.840218 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:36.865881 master-0 kubenswrapper[37036]: I0312 14:44:36.865824 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzvqg\" (UniqueName: \"kubernetes.io/projected/6f7906c1-4fcc-42db-aaf6-b202aca79ed8-kube-api-access-wzvqg\") pod \"perses-operator-5bf474d74f-lv55q\" (UID: \"6f7906c1-4fcc-42db-aaf6-b202aca79ed8\") " pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:36.950292 master-0 kubenswrapper[37036]: I0312 14:44:36.950219 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:39.216839 master-0 kubenswrapper[37036]: W0312 14:44:39.216744 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f7906c1_4fcc_42db_aaf6_b202aca79ed8.slice/crio-30e2524d5da6bdd872fa2067ada415aa3eac22adfd96ed532f53c91b90d848ea WatchSource:0}: Error finding container 30e2524d5da6bdd872fa2067ada415aa3eac22adfd96ed532f53c91b90d848ea: Status 404 returned error can't find the container with id 30e2524d5da6bdd872fa2067ada415aa3eac22adfd96ed532f53c91b90d848ea Mar 12 14:44:39.218035 master-0 kubenswrapper[37036]: I0312 14:44:39.217877 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lv55q"] Mar 12 14:44:39.299494 master-0 kubenswrapper[37036]: I0312 14:44:39.299169 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76"] Mar 12 14:44:39.310925 master-0 kubenswrapper[37036]: W0312 14:44:39.310499 37036 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd478f89_821d_40ce_9c62_ea212e20696f.slice/crio-0c6279b0bdbe0f38bfe7e8666bf276fdfd5e05bc3b8cda8a75d93a0ca9fdff7d WatchSource:0}: Error finding container 0c6279b0bdbe0f38bfe7e8666bf276fdfd5e05bc3b8cda8a75d93a0ca9fdff7d: Status 404 returned error can't find the container with id 0c6279b0bdbe0f38bfe7e8666bf276fdfd5e05bc3b8cda8a75d93a0ca9fdff7d Mar 12 14:44:39.322732 master-0 kubenswrapper[37036]: I0312 14:44:39.322602 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz"] Mar 12 14:44:39.340720 master-0 kubenswrapper[37036]: W0312 14:44:39.340229 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda05dfe81_0288_4c00_b2b4_381cd4e88467.slice/crio-b3270dc00f4ca2ed2bc2cbf50d8aa9d8415fe03ef90e3e9c34252c748bb926f3 WatchSource:0}: Error finding container b3270dc00f4ca2ed2bc2cbf50d8aa9d8415fe03ef90e3e9c34252c748bb926f3: Status 404 returned error can't find the container with id b3270dc00f4ca2ed2bc2cbf50d8aa9d8415fe03ef90e3e9c34252c748bb926f3 Mar 12 14:44:39.509922 master-0 kubenswrapper[37036]: W0312 14:44:39.504069 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a75f59f_8dea_49f1_b506_fc2b89ef4252.slice/crio-44ac3327f5fc9b1e22f3479253c4f64423ec6f14d15854bf6329e0caf2b975cd WatchSource:0}: Error finding container 44ac3327f5fc9b1e22f3479253c4f64423ec6f14d15854bf6329e0caf2b975cd: Status 404 returned error can't find the container with id 44ac3327f5fc9b1e22f3479253c4f64423ec6f14d15854bf6329e0caf2b975cd Mar 12 14:44:39.509922 master-0 kubenswrapper[37036]: W0312 14:44:39.506566 37036 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92320a06_534b_4135_9012_6f5099673ab1.slice/crio-fcb40c4bef1ad21330240a6c4b50b002839a525c128a12e64991fcc02d5a500e WatchSource:0}: Error finding container fcb40c4bef1ad21330240a6c4b50b002839a525c128a12e64991fcc02d5a500e: Status 404 returned error can't find the container with id fcb40c4bef1ad21330240a6c4b50b002839a525c128a12e64991fcc02d5a500e Mar 12 14:44:39.518921 master-0 kubenswrapper[37036]: I0312 14:44:39.512358 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf"] Mar 12 14:44:39.518921 master-0 kubenswrapper[37036]: I0312 14:44:39.518271 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-f9rp8"] Mar 12 14:44:39.864454 master-0 kubenswrapper[37036]: I0312 14:44:39.864289 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" event={"ID":"dd478f89-821d-40ce-9c62-ea212e20696f","Type":"ContainerStarted","Data":"0c6279b0bdbe0f38bfe7e8666bf276fdfd5e05bc3b8cda8a75d93a0ca9fdff7d"} Mar 12 14:44:39.870289 master-0 kubenswrapper[37036]: I0312 14:44:39.870218 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" event={"ID":"9ae40425-b1c6-4fe9-bf12-7af305cc7990","Type":"ContainerStarted","Data":"554a175894634bf5547d1d04a04979da9fefc3462255bcfda1455d9e38353d1f"} Mar 12 14:44:39.870478 master-0 kubenswrapper[37036]: I0312 14:44:39.870316 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:44:39.872053 master-0 kubenswrapper[37036]: I0312 14:44:39.872002 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" 
event={"ID":"5160bc8b-ff23-474f-b5b9-fa90f8e78394","Type":"ContainerStarted","Data":"853035a76f6b0210a6053c77403c2d19c329e4e1532a450971004abf40d24743"} Mar 12 14:44:39.872744 master-0 kubenswrapper[37036]: I0312 14:44:39.872708 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:44:39.873419 master-0 kubenswrapper[37036]: I0312 14:44:39.873384 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" event={"ID":"92320a06-534b-4135-9012-6f5099673ab1","Type":"ContainerStarted","Data":"fcb40c4bef1ad21330240a6c4b50b002839a525c128a12e64991fcc02d5a500e"} Mar 12 14:44:39.874976 master-0 kubenswrapper[37036]: I0312 14:44:39.874939 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" event={"ID":"a05dfe81-0288-4c00-b2b4-381cd4e88467","Type":"ContainerStarted","Data":"b3270dc00f4ca2ed2bc2cbf50d8aa9d8415fe03ef90e3e9c34252c748bb926f3"} Mar 12 14:44:39.876162 master-0 kubenswrapper[37036]: I0312 14:44:39.876124 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" event={"ID":"6f7906c1-4fcc-42db-aaf6-b202aca79ed8","Type":"ContainerStarted","Data":"30e2524d5da6bdd872fa2067ada415aa3eac22adfd96ed532f53c91b90d848ea"} Mar 12 14:44:39.877305 master-0 kubenswrapper[37036]: I0312 14:44:39.877271 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" event={"ID":"8a75f59f-8dea-49f1-b506-fc2b89ef4252","Type":"ContainerStarted","Data":"44ac3327f5fc9b1e22f3479253c4f64423ec6f14d15854bf6329e0caf2b975cd"} Mar 12 14:44:40.010928 master-0 kubenswrapper[37036]: I0312 14:44:40.010802 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" podStartSLOduration=4.70647452 podStartE2EDuration="14.01078015s" podCreationTimestamp="2026-03-12 14:44:26 +0000 UTC" firstStartedPulling="2026-03-12 14:44:29.333930038 +0000 UTC m=+528.341670975" lastFinishedPulling="2026-03-12 14:44:38.638235668 +0000 UTC m=+537.645976605" observedRunningTime="2026-03-12 14:44:40.002484773 +0000 UTC m=+539.010225720" watchObservedRunningTime="2026-03-12 14:44:40.01078015 +0000 UTC m=+539.018521087" Mar 12 14:44:40.055923 master-0 kubenswrapper[37036]: I0312 14:44:40.055445 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" podStartSLOduration=4.81203271 podStartE2EDuration="14.055428242s" podCreationTimestamp="2026-03-12 14:44:26 +0000 UTC" firstStartedPulling="2026-03-12 14:44:29.32472449 +0000 UTC m=+528.332465427" lastFinishedPulling="2026-03-12 14:44:38.568120022 +0000 UTC m=+537.575860959" observedRunningTime="2026-03-12 14:44:40.054379445 +0000 UTC m=+539.062120382" watchObservedRunningTime="2026-03-12 14:44:40.055428242 +0000 UTC m=+539.063169179" Mar 12 14:44:51.032957 master-0 kubenswrapper[37036]: I0312 14:44:51.032869 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" event={"ID":"8a75f59f-8dea-49f1-b506-fc2b89ef4252","Type":"ContainerStarted","Data":"31b056effa3d8d05d6a4e4e875b462ed608933104883684bb8e3c3bb0ea7ab0c"} Mar 12 14:44:51.033459 master-0 kubenswrapper[37036]: I0312 14:44:51.033286 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:51.034990 master-0 kubenswrapper[37036]: I0312 14:44:51.034950 37036 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-f9rp8 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get 
\"http://10.128.0.135:8081/healthz\": dial tcp 10.128.0.135:8081: connect: connection refused" start-of-body= Mar 12 14:44:51.035065 master-0 kubenswrapper[37036]: I0312 14:44:51.035024 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" podUID="8a75f59f-8dea-49f1-b506-fc2b89ef4252" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.135:8081/healthz\": dial tcp 10.128.0.135:8081: connect: connection refused" Mar 12 14:44:51.037034 master-0 kubenswrapper[37036]: I0312 14:44:51.036691 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" event={"ID":"dd478f89-821d-40ce-9c62-ea212e20696f","Type":"ContainerStarted","Data":"92577298fdc42bcd6002284714e1b93683478881186c2feb0c0509a1b4ae3d4b"} Mar 12 14:44:51.049593 master-0 kubenswrapper[37036]: I0312 14:44:51.049305 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" event={"ID":"6f7906c1-4fcc-42db-aaf6-b202aca79ed8","Type":"ContainerStarted","Data":"b408d01cbc0067ebd5a16bb69116785f9647fbdd9c68cf4712fb8c8f60617983"} Mar 12 14:44:51.049686 master-0 kubenswrapper[37036]: I0312 14:44:51.049607 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:51.072020 master-0 kubenswrapper[37036]: I0312 14:44:51.071684 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" podStartSLOduration=3.936832736 podStartE2EDuration="15.071669053s" podCreationTimestamp="2026-03-12 14:44:36 +0000 UTC" firstStartedPulling="2026-03-12 14:44:39.507651259 +0000 UTC m=+538.515392196" lastFinishedPulling="2026-03-12 14:44:50.642487576 +0000 UTC m=+549.650228513" observedRunningTime="2026-03-12 14:44:51.07071848 +0000 UTC m=+550.078459417" 
watchObservedRunningTime="2026-03-12 14:44:51.071669053 +0000 UTC m=+550.079409990" Mar 12 14:44:51.115607 master-0 kubenswrapper[37036]: I0312 14:44:51.115404 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-lzj76" podStartSLOduration=3.793860128 podStartE2EDuration="15.115384153s" podCreationTimestamp="2026-03-12 14:44:36 +0000 UTC" firstStartedPulling="2026-03-12 14:44:39.328703593 +0000 UTC m=+538.336444530" lastFinishedPulling="2026-03-12 14:44:50.650227618 +0000 UTC m=+549.657968555" observedRunningTime="2026-03-12 14:44:51.112174282 +0000 UTC m=+550.119915219" watchObservedRunningTime="2026-03-12 14:44:51.115384153 +0000 UTC m=+550.123125090" Mar 12 14:44:51.165422 master-0 kubenswrapper[37036]: I0312 14:44:51.164340 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" podStartSLOduration=3.741567695 podStartE2EDuration="15.164311011s" podCreationTimestamp="2026-03-12 14:44:36 +0000 UTC" firstStartedPulling="2026-03-12 14:44:39.219626177 +0000 UTC m=+538.227367114" lastFinishedPulling="2026-03-12 14:44:50.642369493 +0000 UTC m=+549.650110430" observedRunningTime="2026-03-12 14:44:51.162640519 +0000 UTC m=+550.170381486" watchObservedRunningTime="2026-03-12 14:44:51.164311011 +0000 UTC m=+550.172051948" Mar 12 14:44:52.058501 master-0 kubenswrapper[37036]: I0312 14:44:52.058429 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" event={"ID":"92320a06-534b-4135-9012-6f5099673ab1","Type":"ContainerStarted","Data":"4082a27bcb07a32c015a0cb4c4d4678b900afb0b00bb750e00d5e4b32e0cb15d"} Mar 12 14:44:52.061892 master-0 kubenswrapper[37036]: I0312 14:44:52.061024 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" 
event={"ID":"a05dfe81-0288-4c00-b2b4-381cd4e88467","Type":"ContainerStarted","Data":"7ade5096d288e9753be3bc05e10f044f44be9d1fd374a7f9e57f4913426ab5b3"} Mar 12 14:44:52.062738 master-0 kubenswrapper[37036]: I0312 14:44:52.062703 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-f9rp8" Mar 12 14:44:52.082384 master-0 kubenswrapper[37036]: I0312 14:44:52.082317 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-55d74954d9-gl7gf" podStartSLOduration=4.948263176 podStartE2EDuration="16.082301472s" podCreationTimestamp="2026-03-12 14:44:36 +0000 UTC" firstStartedPulling="2026-03-12 14:44:39.508361408 +0000 UTC m=+538.516102345" lastFinishedPulling="2026-03-12 14:44:50.642399704 +0000 UTC m=+549.650140641" observedRunningTime="2026-03-12 14:44:52.079175995 +0000 UTC m=+551.086916932" watchObservedRunningTime="2026-03-12 14:44:52.082301472 +0000 UTC m=+551.090042409" Mar 12 14:44:52.169934 master-0 kubenswrapper[37036]: I0312 14:44:52.169667 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kxgwz" podStartSLOduration=4.851013424 podStartE2EDuration="16.169645198s" podCreationTimestamp="2026-03-12 14:44:36 +0000 UTC" firstStartedPulling="2026-03-12 14:44:39.349177243 +0000 UTC m=+538.356918180" lastFinishedPulling="2026-03-12 14:44:50.667809007 +0000 UTC m=+549.675549954" observedRunningTime="2026-03-12 14:44:52.113856868 +0000 UTC m=+551.121597805" watchObservedRunningTime="2026-03-12 14:44:52.169645198 +0000 UTC m=+551.177386135" Mar 12 14:44:56.953637 master-0 kubenswrapper[37036]: I0312 14:44:56.953567 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-lv55q" Mar 12 14:44:57.293345 master-0 kubenswrapper[37036]: I0312 14:44:57.293276 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-644b57d759-m5szb" Mar 12 14:45:16.710419 master-0 kubenswrapper[37036]: I0312 14:45:16.710375 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-794566cf8d-rcz9c" Mar 12 14:45:24.203968 master-0 kubenswrapper[37036]: I0312 14:45:24.199759 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr"] Mar 12 14:45:24.203968 master-0 kubenswrapper[37036]: I0312 14:45:24.201484 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.215400 master-0 kubenswrapper[37036]: I0312 14:45:24.212088 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 14:45:24.232746 master-0 kubenswrapper[37036]: I0312 14:45:24.232680 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t6r4g"] Mar 12 14:45:24.245426 master-0 kubenswrapper[37036]: I0312 14:45:24.245361 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr"] Mar 12 14:45:24.246229 master-0 kubenswrapper[37036]: I0312 14:45:24.246195 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.249264 master-0 kubenswrapper[37036]: I0312 14:45:24.249025 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 14:45:24.250084 master-0 kubenswrapper[37036]: I0312 14:45:24.250054 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 14:45:24.267086 master-0 kubenswrapper[37036]: I0312 14:45:24.267044 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-sockets\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.268970 master-0 kubenswrapper[37036]: I0312 14:45:24.268933 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.269151 master-0 kubenswrapper[37036]: I0312 14:45:24.269132 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-conf\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.269321 master-0 kubenswrapper[37036]: I0312 14:45:24.269301 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9l6\" (UniqueName: \"kubernetes.io/projected/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-kube-api-access-sm9l6\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 
14:45:24.269445 master-0 kubenswrapper[37036]: I0312 14:45:24.269431 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-startup\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.269580 master-0 kubenswrapper[37036]: I0312 14:45:24.269561 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics-certs\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.270504 master-0 kubenswrapper[37036]: I0312 14:45:24.270457 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-reloader\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.341999 master-0 kubenswrapper[37036]: I0312 14:45:24.339556 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-l6mt6"] Mar 12 14:45:24.342215 master-0 kubenswrapper[37036]: I0312 14:45:24.342177 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.344147 master-0 kubenswrapper[37036]: I0312 14:45:24.343831 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 14:45:24.346289 master-0 kubenswrapper[37036]: I0312 14:45:24.346180 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 14:45:24.347167 master-0 kubenswrapper[37036]: I0312 14:45:24.346413 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 14:45:24.360761 master-0 kubenswrapper[37036]: I0312 14:45:24.360711 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-9rrn5"] Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.371680 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-reloader\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.371765 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-sockets\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.371823 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.371849 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372158 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjk92\" (UniqueName: \"kubernetes.io/projected/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-kube-api-access-zjk92\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372225 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b53f7ff-ae45-4b88-9a32-8548fcab110a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372268 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-conf\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372305 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372352 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sm9l6\" (UniqueName: \"kubernetes.io/projected/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-kube-api-access-sm9l6\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372445 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-startup\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372474 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metallb-excludel2\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372492 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnm5\" (UniqueName: \"kubernetes.io/projected/9b53f7ff-ae45-4b88-9a32-8548fcab110a-kube-api-access-bpnm5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372504 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-sockets\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372517 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics-certs\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.372578 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-reloader\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.373172 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-conf\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.373202 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.373349 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.373477 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-frr-startup\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.391974 master-0 kubenswrapper[37036]: I0312 14:45:24.375522 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 14:45:24.393292 master-0 kubenswrapper[37036]: I0312 14:45:24.393129 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-metrics-certs\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.416979 master-0 kubenswrapper[37036]: I0312 14:45:24.412106 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9l6\" (UniqueName: \"kubernetes.io/projected/3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d-kube-api-access-sm9l6\") pod \"frr-k8s-t6r4g\" (UID: \"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d\") " pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.433970 master-0 kubenswrapper[37036]: I0312 14:45:24.433933 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-9rrn5"] Mar 12 14:45:24.473395 master-0 kubenswrapper[37036]: I0312 14:45:24.473043 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.473395 master-0 kubenswrapper[37036]: I0312 14:45:24.473103 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metallb-excludel2\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.473395 master-0 kubenswrapper[37036]: I0312 14:45:24.473127 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpnm5\" (UniqueName: \"kubernetes.io/projected/9b53f7ff-ae45-4b88-9a32-8548fcab110a-kube-api-access-bpnm5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: E0312 14:45:24.473479 37036 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: E0312 14:45:24.473521 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist podName:a17fe07a-69eb-4d18-9348-1ea5bddf51a6 nodeName:}" failed. No retries permitted until 2026-03-12 14:45:24.973506186 +0000 UTC m=+583.981247123 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist") pod "speaker-l6mt6" (UID: "a17fe07a-69eb-4d18-9348-1ea5bddf51a6") : secret "metallb-memberlist" not found Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474117 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rdr\" (UniqueName: \"kubernetes.io/projected/27123016-8e66-428d-8998-0b9113e606a7-kube-api-access-h2rdr\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474200 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metallb-excludel2\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474263 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-metrics-certs\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474298 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-cert\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474388 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zjk92\" (UniqueName: \"kubernetes.io/projected/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-kube-api-access-zjk92\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474419 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: I0312 14:45:24.474452 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b53f7ff-ae45-4b88-9a32-8548fcab110a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: E0312 14:45:24.474648 37036 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 12 14:45:24.477168 master-0 kubenswrapper[37036]: E0312 14:45:24.474677 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs podName:a17fe07a-69eb-4d18-9348-1ea5bddf51a6 nodeName:}" failed. No retries permitted until 2026-03-12 14:45:24.974668724 +0000 UTC m=+583.982409661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs") pod "speaker-l6mt6" (UID: "a17fe07a-69eb-4d18-9348-1ea5bddf51a6") : secret "speaker-certs-secret" not found Mar 12 14:45:24.478220 master-0 kubenswrapper[37036]: I0312 14:45:24.478177 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b53f7ff-ae45-4b88-9a32-8548fcab110a-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.500716 master-0 kubenswrapper[37036]: I0312 14:45:24.500671 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjk92\" (UniqueName: \"kubernetes.io/projected/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-kube-api-access-zjk92\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.501020 master-0 kubenswrapper[37036]: I0312 14:45:24.500772 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpnm5\" (UniqueName: \"kubernetes.io/projected/9b53f7ff-ae45-4b88-9a32-8548fcab110a-kube-api-access-bpnm5\") pod \"frr-k8s-webhook-server-bcc4b6f68-nt2jr\" (UID: \"9b53f7ff-ae45-4b88-9a32-8548fcab110a\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.572737 master-0 kubenswrapper[37036]: I0312 14:45:24.572675 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:24.576017 master-0 kubenswrapper[37036]: I0312 14:45:24.575978 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-metrics-certs\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.576146 master-0 kubenswrapper[37036]: I0312 14:45:24.576131 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-cert\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.576411 master-0 kubenswrapper[37036]: I0312 14:45:24.576396 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2rdr\" (UniqueName: \"kubernetes.io/projected/27123016-8e66-428d-8998-0b9113e606a7-kube-api-access-h2rdr\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.580286 master-0 kubenswrapper[37036]: I0312 14:45:24.580261 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-metrics-certs\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.581542 master-0 kubenswrapper[37036]: I0312 14:45:24.581521 37036 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 14:45:24.588986 master-0 kubenswrapper[37036]: I0312 14:45:24.588346 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:24.594278 master-0 kubenswrapper[37036]: I0312 14:45:24.594251 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/27123016-8e66-428d-8998-0b9113e606a7-cert\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.595533 master-0 kubenswrapper[37036]: I0312 14:45:24.595489 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2rdr\" (UniqueName: \"kubernetes.io/projected/27123016-8e66-428d-8998-0b9113e606a7-kube-api-access-h2rdr\") pod \"controller-7bb4cc7c98-9rrn5\" (UID: \"27123016-8e66-428d-8998-0b9113e606a7\") " pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.776563 master-0 kubenswrapper[37036]: I0312 14:45:24.776458 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:24.981427 master-0 kubenswrapper[37036]: I0312 14:45:24.981369 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.981546 master-0 kubenswrapper[37036]: I0312 14:45:24.981460 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.981652 master-0 kubenswrapper[37036]: E0312 14:45:24.981621 37036 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 
14:45:24.981716 master-0 kubenswrapper[37036]: E0312 14:45:24.981699 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist podName:a17fe07a-69eb-4d18-9348-1ea5bddf51a6 nodeName:}" failed. No retries permitted until 2026-03-12 14:45:25.981675911 +0000 UTC m=+584.989416868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist") pod "speaker-l6mt6" (UID: "a17fe07a-69eb-4d18-9348-1ea5bddf51a6") : secret "metallb-memberlist" not found Mar 12 14:45:24.983537 master-0 kubenswrapper[37036]: I0312 14:45:24.983493 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr"] Mar 12 14:45:24.985182 master-0 kubenswrapper[37036]: I0312 14:45:24.985148 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-metrics-certs\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:24.987494 master-0 kubenswrapper[37036]: W0312 14:45:24.987463 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b53f7ff_ae45_4b88_9a32_8548fcab110a.slice/crio-4c6069c605b66354d4aca11e6e1a724f0e8da98ea30ba2cd92f482a708674468 WatchSource:0}: Error finding container 4c6069c605b66354d4aca11e6e1a724f0e8da98ea30ba2cd92f482a708674468: Status 404 returned error can't find the container with id 4c6069c605b66354d4aca11e6e1a724f0e8da98ea30ba2cd92f482a708674468 Mar 12 14:45:25.203059 master-0 kubenswrapper[37036]: I0312 14:45:25.202979 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-9rrn5"] Mar 12 14:45:25.334456 master-0 kubenswrapper[37036]: I0312 14:45:25.334395 
37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-9rrn5" event={"ID":"27123016-8e66-428d-8998-0b9113e606a7","Type":"ContainerStarted","Data":"97dc06d1303ebca90edfb392a5be643e62b82f9118e394d81722bb6026d303e9"} Mar 12 14:45:25.335892 master-0 kubenswrapper[37036]: I0312 14:45:25.335858 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"3f839d3f4a1c363b0854a3661c0c00e7e278cd5f25ca634a9dc351468f20a211"} Mar 12 14:45:25.337311 master-0 kubenswrapper[37036]: I0312 14:45:25.337278 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" event={"ID":"9b53f7ff-ae45-4b88-9a32-8548fcab110a","Type":"ContainerStarted","Data":"4c6069c605b66354d4aca11e6e1a724f0e8da98ea30ba2cd92f482a708674468"} Mar 12 14:45:25.995183 master-0 kubenswrapper[37036]: I0312 14:45:25.995131 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:25.998502 master-0 kubenswrapper[37036]: I0312 14:45:25.998465 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a17fe07a-69eb-4d18-9348-1ea5bddf51a6-memberlist\") pod \"speaker-l6mt6\" (UID: \"a17fe07a-69eb-4d18-9348-1ea5bddf51a6\") " pod="metallb-system/speaker-l6mt6" Mar 12 14:45:26.257960 master-0 kubenswrapper[37036]: I0312 14:45:26.255466 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-l6mt6" Mar 12 14:45:26.292002 master-0 kubenswrapper[37036]: I0312 14:45:26.290838 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj"] Mar 12 14:45:26.294383 master-0 kubenswrapper[37036]: I0312 14:45:26.294042 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" Mar 12 14:45:26.311793 master-0 kubenswrapper[37036]: I0312 14:45:26.311748 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv"] Mar 12 14:45:26.334423 master-0 kubenswrapper[37036]: I0312 14:45:26.331254 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj"] Mar 12 14:45:26.334423 master-0 kubenswrapper[37036]: I0312 14:45:26.331363 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.335991 master-0 kubenswrapper[37036]: I0312 14:45:26.335954 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 14:45:26.342126 master-0 kubenswrapper[37036]: I0312 14:45:26.342071 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q2z2r"] Mar 12 14:45:26.343586 master-0 kubenswrapper[37036]: I0312 14:45:26.343437 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.349602 master-0 kubenswrapper[37036]: I0312 14:45:26.349556 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv"] Mar 12 14:45:26.368975 master-0 kubenswrapper[37036]: I0312 14:45:26.368915 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l6mt6" event={"ID":"a17fe07a-69eb-4d18-9348-1ea5bddf51a6","Type":"ContainerStarted","Data":"b2e5cd5f985b07df8964fa602105c6fc62faebde06d11fcee5c8f49057422423"} Mar 12 14:45:26.374411 master-0 kubenswrapper[37036]: I0312 14:45:26.372773 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-9rrn5" event={"ID":"27123016-8e66-428d-8998-0b9113e606a7","Type":"ContainerStarted","Data":"d731f984d3451fb46e7c2877dafa58fcaa1637828b4059cfbff6870fb35aa2e0"} Mar 12 14:45:26.408767 master-0 kubenswrapper[37036]: I0312 14:45:26.408722 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2dg\" (UniqueName: \"kubernetes.io/projected/63f17f99-00e2-470c-9a36-a121e3bd8fb8-kube-api-access-7b2dg\") pod \"nmstate-metrics-9b8c8685d-4wrzj\" (UID: \"63f17f99-00e2-470c-9a36-a121e3bd8fb8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" Mar 12 14:45:26.443826 master-0 kubenswrapper[37036]: I0312 14:45:26.438864 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2"] Mar 12 14:45:26.443826 master-0 kubenswrapper[37036]: I0312 14:45:26.441369 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.446841 master-0 kubenswrapper[37036]: I0312 14:45:26.444214 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 14:45:26.451301 master-0 kubenswrapper[37036]: I0312 14:45:26.449846 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 14:45:26.454476 master-0 kubenswrapper[37036]: I0312 14:45:26.454315 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2"] Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510498 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k8wf\" (UniqueName: \"kubernetes.io/projected/2a484e66-3d50-4a01-968a-7758520e5880-kube-api-access-4k8wf\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510586 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-ovs-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510613 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8946\" (UniqueName: \"kubernetes.io/projected/9098b81d-4f6c-4c5b-a8eb-8471467f295f-kube-api-access-c8946\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 
14:45:26.510649 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2dg\" (UniqueName: \"kubernetes.io/projected/63f17f99-00e2-470c-9a36-a121e3bd8fb8-kube-api-access-7b2dg\") pod \"nmstate-metrics-9b8c8685d-4wrzj\" (UID: \"63f17f99-00e2-470c-9a36-a121e3bd8fb8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510684 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-nmstate-lock\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510718 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-dbus-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.515215 master-0 kubenswrapper[37036]: I0312 14:45:26.510760 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a484e66-3d50-4a01-968a-7758520e5880-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.535275 master-0 kubenswrapper[37036]: I0312 14:45:26.535230 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2dg\" (UniqueName: \"kubernetes.io/projected/63f17f99-00e2-470c-9a36-a121e3bd8fb8-kube-api-access-7b2dg\") pod \"nmstate-metrics-9b8c8685d-4wrzj\" (UID: \"63f17f99-00e2-470c-9a36-a121e3bd8fb8\") " 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" Mar 12 14:45:26.611892 master-0 kubenswrapper[37036]: I0312 14:45:26.611851 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a484e66-3d50-4a01-968a-7758520e5880-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.612101 master-0 kubenswrapper[37036]: I0312 14:45:26.611946 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnkf\" (UniqueName: \"kubernetes.io/projected/b866dae3-6a86-4fc5-af95-d12b24ad3f52-kube-api-access-rrnkf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.612101 master-0 kubenswrapper[37036]: I0312 14:45:26.612006 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k8wf\" (UniqueName: \"kubernetes.io/projected/2a484e66-3d50-4a01-968a-7758520e5880-kube-api-access-4k8wf\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.612101 master-0 kubenswrapper[37036]: I0312 14:45:26.612044 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-ovs-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.612101 master-0 kubenswrapper[37036]: I0312 14:45:26.612061 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8946\" (UniqueName: 
\"kubernetes.io/projected/9098b81d-4f6c-4c5b-a8eb-8471467f295f-kube-api-access-c8946\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.612101 master-0 kubenswrapper[37036]: I0312 14:45:26.612086 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866dae3-6a86-4fc5-af95-d12b24ad3f52-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.612269 master-0 kubenswrapper[37036]: I0312 14:45:26.612105 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-nmstate-lock\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.612269 master-0 kubenswrapper[37036]: I0312 14:45:26.612128 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-dbus-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.612269 master-0 kubenswrapper[37036]: I0312 14:45:26.612146 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866dae3-6a86-4fc5-af95-d12b24ad3f52-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.613452 master-0 kubenswrapper[37036]: I0312 14:45:26.612644 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-nmstate-lock\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.613452 master-0 kubenswrapper[37036]: I0312 14:45:26.612785 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-ovs-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.613452 master-0 kubenswrapper[37036]: I0312 14:45:26.612828 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9098b81d-4f6c-4c5b-a8eb-8471467f295f-dbus-socket\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.617962 master-0 kubenswrapper[37036]: I0312 14:45:26.617914 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2a484e66-3d50-4a01-968a-7758520e5880-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.623673 master-0 kubenswrapper[37036]: I0312 14:45:26.623616 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f48d8466d-4rmwh"] Mar 12 14:45:26.625026 master-0 kubenswrapper[37036]: I0312 14:45:26.624989 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.650043 master-0 kubenswrapper[37036]: I0312 14:45:26.649988 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f48d8466d-4rmwh"] Mar 12 14:45:26.654093 master-0 kubenswrapper[37036]: I0312 14:45:26.654019 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k8wf\" (UniqueName: \"kubernetes.io/projected/2a484e66-3d50-4a01-968a-7758520e5880-kube-api-access-4k8wf\") pod \"nmstate-webhook-5f558f5558-9qpxv\" (UID: \"2a484e66-3d50-4a01-968a-7758520e5880\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.655470 master-0 kubenswrapper[37036]: I0312 14:45:26.655431 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8946\" (UniqueName: \"kubernetes.io/projected/9098b81d-4f6c-4c5b-a8eb-8471467f295f-kube-api-access-c8946\") pod \"nmstate-handler-q2z2r\" (UID: \"9098b81d-4f6c-4c5b-a8eb-8471467f295f\") " pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.708491 master-0 kubenswrapper[37036]: I0312 14:45:26.706750 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" Mar 12 14:45:26.714222 master-0 kubenswrapper[37036]: I0312 14:45:26.714086 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnkf\" (UniqueName: \"kubernetes.io/projected/b866dae3-6a86-4fc5-af95-d12b24ad3f52-kube-api-access-rrnkf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.714222 master-0 kubenswrapper[37036]: I0312 14:45:26.714204 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866dae3-6a86-4fc5-af95-d12b24ad3f52-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.714442 master-0 kubenswrapper[37036]: I0312 14:45:26.714239 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866dae3-6a86-4fc5-af95-d12b24ad3f52-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.715552 master-0 kubenswrapper[37036]: I0312 14:45:26.715452 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866dae3-6a86-4fc5-af95-d12b24ad3f52-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.730431 master-0 kubenswrapper[37036]: I0312 14:45:26.730274 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b866dae3-6a86-4fc5-af95-d12b24ad3f52-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.733960 master-0 kubenswrapper[37036]: I0312 14:45:26.733881 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnkf\" (UniqueName: \"kubernetes.io/projected/b866dae3-6a86-4fc5-af95-d12b24ad3f52-kube-api-access-rrnkf\") pod \"nmstate-console-plugin-86f58fcf4-6d5c2\" (UID: \"b866dae3-6a86-4fc5-af95-d12b24ad3f52\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.744570 master-0 kubenswrapper[37036]: I0312 14:45:26.744515 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:26.759868 master-0 kubenswrapper[37036]: I0312 14:45:26.759607 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:26.786574 master-0 kubenswrapper[37036]: I0312 14:45:26.785595 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.815541 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppcl6\" (UniqueName: \"kubernetes.io/projected/10012bf3-1e8d-4224-9d71-51d4a0231a08-kube-api-access-ppcl6\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.815594 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.815620 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-oauth-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.815980 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-oauth-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.816025 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.816056 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-service-ca\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.824098 master-0 kubenswrapper[37036]: I0312 14:45:26.816103 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-trusted-ca-bundle\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.850398 master-0 kubenswrapper[37036]: W0312 14:45:26.850314 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9098b81d_4f6c_4c5b_a8eb_8471467f295f.slice/crio-32ed1bfcece3629e74b83051f1005e055599e3ed02381c2030929fa5ec9e2ce6 WatchSource:0}: Error finding container 32ed1bfcece3629e74b83051f1005e055599e3ed02381c2030929fa5ec9e2ce6: Status 404 returned error can't find the container with id 32ed1bfcece3629e74b83051f1005e055599e3ed02381c2030929fa5ec9e2ce6 Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917230 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppcl6\" (UniqueName: \"kubernetes.io/projected/10012bf3-1e8d-4224-9d71-51d4a0231a08-kube-api-access-ppcl6\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") 
" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917275 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917294 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-oauth-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917339 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-oauth-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917359 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917381 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-service-ca\") pod \"console-f48d8466d-4rmwh\" (UID: 
\"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918269 master-0 kubenswrapper[37036]: I0312 14:45:26.917412 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-trusted-ca-bundle\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.918819 master-0 kubenswrapper[37036]: I0312 14:45:26.918363 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-oauth-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.920047 master-0 kubenswrapper[37036]: I0312 14:45:26.918951 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-service-ca\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.920047 master-0 kubenswrapper[37036]: I0312 14:45:26.919444 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.920047 master-0 kubenswrapper[37036]: I0312 14:45:26.919865 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10012bf3-1e8d-4224-9d71-51d4a0231a08-trusted-ca-bundle\") pod \"console-f48d8466d-4rmwh\" (UID: 
\"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.936943 master-0 kubenswrapper[37036]: I0312 14:45:26.932079 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-oauth-config\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.960238 master-0 kubenswrapper[37036]: I0312 14:45:26.960171 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10012bf3-1e8d-4224-9d71-51d4a0231a08-console-serving-cert\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.960457 master-0 kubenswrapper[37036]: I0312 14:45:26.960431 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppcl6\" (UniqueName: \"kubernetes.io/projected/10012bf3-1e8d-4224-9d71-51d4a0231a08-kube-api-access-ppcl6\") pod \"console-f48d8466d-4rmwh\" (UID: \"10012bf3-1e8d-4224-9d71-51d4a0231a08\") " pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:26.992873 master-0 kubenswrapper[37036]: I0312 14:45:26.992818 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:27.317507 master-0 kubenswrapper[37036]: W0312 14:45:27.317445 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63f17f99_00e2_470c_9a36_a121e3bd8fb8.slice/crio-2874bac3be288d4e8676442759e919ee8f3f6237e20837abf8b25bc351713db2 WatchSource:0}: Error finding container 2874bac3be288d4e8676442759e919ee8f3f6237e20837abf8b25bc351713db2: Status 404 returned error can't find the container with id 2874bac3be288d4e8676442759e919ee8f3f6237e20837abf8b25bc351713db2 Mar 12 14:45:27.327571 master-0 kubenswrapper[37036]: I0312 14:45:27.327524 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj"] Mar 12 14:45:27.411402 master-0 kubenswrapper[37036]: I0312 14:45:27.406695 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" event={"ID":"63f17f99-00e2-470c-9a36-a121e3bd8fb8","Type":"ContainerStarted","Data":"2874bac3be288d4e8676442759e919ee8f3f6237e20837abf8b25bc351713db2"} Mar 12 14:45:27.411402 master-0 kubenswrapper[37036]: I0312 14:45:27.409157 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l6mt6" event={"ID":"a17fe07a-69eb-4d18-9348-1ea5bddf51a6","Type":"ContainerStarted","Data":"74d9cf5e174234cb38e3c88454be4cf08d6e7b3fbd4439acb3426ecc71ae0346"} Mar 12 14:45:27.411402 master-0 kubenswrapper[37036]: W0312 14:45:27.409866 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a484e66_3d50_4a01_968a_7758520e5880.slice/crio-1b1e5fea69b2a13234de3f93566a2fb46b046d9c6d51c6a94fc93f67560ee6bf WatchSource:0}: Error finding container 1b1e5fea69b2a13234de3f93566a2fb46b046d9c6d51c6a94fc93f67560ee6bf: Status 404 returned error can't find the container with id 
1b1e5fea69b2a13234de3f93566a2fb46b046d9c6d51c6a94fc93f67560ee6bf Mar 12 14:45:27.411402 master-0 kubenswrapper[37036]: I0312 14:45:27.410150 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2z2r" event={"ID":"9098b81d-4f6c-4c5b-a8eb-8471467f295f","Type":"ContainerStarted","Data":"32ed1bfcece3629e74b83051f1005e055599e3ed02381c2030929fa5ec9e2ce6"} Mar 12 14:45:27.411402 master-0 kubenswrapper[37036]: I0312 14:45:27.410977 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv"] Mar 12 14:45:27.431072 master-0 kubenswrapper[37036]: I0312 14:45:27.431003 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2"] Mar 12 14:45:27.556037 master-0 kubenswrapper[37036]: I0312 14:45:27.555991 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f48d8466d-4rmwh"] Mar 12 14:45:28.421519 master-0 kubenswrapper[37036]: I0312 14:45:28.421418 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f48d8466d-4rmwh" event={"ID":"10012bf3-1e8d-4224-9d71-51d4a0231a08","Type":"ContainerStarted","Data":"5401f23ffa38416a0679046581165aba78d59766ae77ac59ac97c5e190b37e79"} Mar 12 14:45:28.422115 master-0 kubenswrapper[37036]: I0312 14:45:28.421511 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f48d8466d-4rmwh" event={"ID":"10012bf3-1e8d-4224-9d71-51d4a0231a08","Type":"ContainerStarted","Data":"65cb41c965929f2f0185790bbb8a6ef67da6b6150c9848a6558ea7bc1f62c97b"} Mar 12 14:45:28.423101 master-0 kubenswrapper[37036]: I0312 14:45:28.423057 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" event={"ID":"2a484e66-3d50-4a01-968a-7758520e5880","Type":"ContainerStarted","Data":"1b1e5fea69b2a13234de3f93566a2fb46b046d9c6d51c6a94fc93f67560ee6bf"} Mar 12 14:45:28.424248 master-0 
kubenswrapper[37036]: I0312 14:45:28.424225 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" event={"ID":"b866dae3-6a86-4fc5-af95-d12b24ad3f52","Type":"ContainerStarted","Data":"e48df80006093310e5db3ea770db4e04fb831801a798ec8472ab83a1640d267c"} Mar 12 14:45:28.446658 master-0 kubenswrapper[37036]: I0312 14:45:28.444990 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f48d8466d-4rmwh" podStartSLOduration=2.444972329 podStartE2EDuration="2.444972329s" podCreationTimestamp="2026-03-12 14:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:45:28.443368339 +0000 UTC m=+587.451109286" watchObservedRunningTime="2026-03-12 14:45:28.444972329 +0000 UTC m=+587.452713266" Mar 12 14:45:29.438635 master-0 kubenswrapper[37036]: I0312 14:45:29.438572 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-9rrn5" event={"ID":"27123016-8e66-428d-8998-0b9113e606a7","Type":"ContainerStarted","Data":"d8c8b6d81a071b03ff038391111ad4b2fe249b0c4a6e3636c6f1fff29c303fbe"} Mar 12 14:45:29.439666 master-0 kubenswrapper[37036]: I0312 14:45:29.438736 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-9rrn5" Mar 12 14:45:29.441551 master-0 kubenswrapper[37036]: I0312 14:45:29.441499 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-l6mt6" event={"ID":"a17fe07a-69eb-4d18-9348-1ea5bddf51a6","Type":"ContainerStarted","Data":"2192beffb69f0d0306fd28d41a6cccd28cbdb5dd9e61be9ca30eea15087f06fe"} Mar 12 14:45:29.442078 master-0 kubenswrapper[37036]: I0312 14:45:29.442015 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-l6mt6" Mar 12 14:45:29.469332 master-0 kubenswrapper[37036]: I0312 
14:45:29.469225 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-9rrn5" podStartSLOduration=2.420893922 podStartE2EDuration="5.469188235s" podCreationTimestamp="2026-03-12 14:45:24 +0000 UTC" firstStartedPulling="2026-03-12 14:45:25.351892731 +0000 UTC m=+584.359633668" lastFinishedPulling="2026-03-12 14:45:28.400187054 +0000 UTC m=+587.407927981" observedRunningTime="2026-03-12 14:45:29.465695159 +0000 UTC m=+588.473436096" watchObservedRunningTime="2026-03-12 14:45:29.469188235 +0000 UTC m=+588.476929172" Mar 12 14:45:29.504539 master-0 kubenswrapper[37036]: I0312 14:45:29.504465 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-l6mt6" podStartSLOduration=3.5280170440000003 podStartE2EDuration="5.504447484s" podCreationTimestamp="2026-03-12 14:45:24 +0000 UTC" firstStartedPulling="2026-03-12 14:45:26.706403543 +0000 UTC m=+585.714144480" lastFinishedPulling="2026-03-12 14:45:28.682833983 +0000 UTC m=+587.690574920" observedRunningTime="2026-03-12 14:45:29.500701541 +0000 UTC m=+588.508442498" watchObservedRunningTime="2026-03-12 14:45:29.504447484 +0000 UTC m=+588.512188421" Mar 12 14:45:33.477950 master-0 kubenswrapper[37036]: I0312 14:45:33.477889 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" event={"ID":"2a484e66-3d50-4a01-968a-7758520e5880","Type":"ContainerStarted","Data":"b7c940ec2c36cc1502cc2e506fd7b51b5de7bb80ae5c5900130020eca4418a78"} Mar 12 14:45:33.479086 master-0 kubenswrapper[37036]: I0312 14:45:33.479067 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" Mar 12 14:45:33.480067 master-0 kubenswrapper[37036]: I0312 14:45:33.480035 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q2z2r" Mar 12 14:45:33.480127 master-0 
kubenswrapper[37036]: I0312 14:45:33.480086 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2z2r" event={"ID":"9098b81d-4f6c-4c5b-a8eb-8471467f295f","Type":"ContainerStarted","Data":"bce912545d7a9fae00ebdfc42396cd854d09975ae7ce551da4501093ee393741"} Mar 12 14:45:33.486950 master-0 kubenswrapper[37036]: I0312 14:45:33.482023 37036 generic.go:334] "Generic (PLEG): container finished" podID="3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d" containerID="2248a4d16e32ee1b06c5a49ca293474b923dea2bc97c300062336a4892f9a255" exitCode=0 Mar 12 14:45:33.486950 master-0 kubenswrapper[37036]: I0312 14:45:33.482057 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerDied","Data":"2248a4d16e32ee1b06c5a49ca293474b923dea2bc97c300062336a4892f9a255"} Mar 12 14:45:33.486950 master-0 kubenswrapper[37036]: I0312 14:45:33.484758 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" event={"ID":"63f17f99-00e2-470c-9a36-a121e3bd8fb8","Type":"ContainerStarted","Data":"fd6c1c1e1ef64a48580b165bfe606713463757c75a6f61e05ae4753f051b456b"} Mar 12 14:45:33.486950 master-0 kubenswrapper[37036]: I0312 14:45:33.484782 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" event={"ID":"63f17f99-00e2-470c-9a36-a121e3bd8fb8","Type":"ContainerStarted","Data":"49bd4fc78b35d9b2fb397f0b5c7c92f66c7980374ea2c3f40eea720740becbe7"} Mar 12 14:45:33.489117 master-0 kubenswrapper[37036]: I0312 14:45:33.487256 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" event={"ID":"9b53f7ff-ae45-4b88-9a32-8548fcab110a","Type":"ContainerStarted","Data":"23ea412ae6bf5c10efff90840b14c8c5150ebdcb9e23e7da8431d38c27f5b131"} Mar 12 14:45:33.489117 master-0 kubenswrapper[37036]: I0312 14:45:33.487729 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" Mar 12 14:45:33.490578 master-0 kubenswrapper[37036]: I0312 14:45:33.490487 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" event={"ID":"b866dae3-6a86-4fc5-af95-d12b24ad3f52","Type":"ContainerStarted","Data":"12ad21e479bea8ba6bed705e693f0dbfb2aa00bdb5b98f458020b0bd973bf2ee"} Mar 12 14:45:33.506522 master-0 kubenswrapper[37036]: I0312 14:45:33.506444 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv" podStartSLOduration=2.49082389 podStartE2EDuration="7.506422146s" podCreationTimestamp="2026-03-12 14:45:26 +0000 UTC" firstStartedPulling="2026-03-12 14:45:27.414892496 +0000 UTC m=+586.422633433" lastFinishedPulling="2026-03-12 14:45:32.430490712 +0000 UTC m=+591.438231689" observedRunningTime="2026-03-12 14:45:33.497817992 +0000 UTC m=+592.505558929" watchObservedRunningTime="2026-03-12 14:45:33.506422146 +0000 UTC m=+592.514163083" Mar 12 14:45:33.536787 master-0 kubenswrapper[37036]: I0312 14:45:33.536681 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-4wrzj" podStartSLOduration=2.424696985 podStartE2EDuration="7.53665248s" podCreationTimestamp="2026-03-12 14:45:26 +0000 UTC" firstStartedPulling="2026-03-12 14:45:27.319198894 +0000 UTC m=+586.326939831" lastFinishedPulling="2026-03-12 14:45:32.431154389 +0000 UTC m=+591.438895326" observedRunningTime="2026-03-12 14:45:33.518647541 +0000 UTC m=+592.526388478" watchObservedRunningTime="2026-03-12 14:45:33.53665248 +0000 UTC m=+592.544393417" Mar 12 14:45:33.628929 master-0 kubenswrapper[37036]: I0312 14:45:33.620345 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q2z2r" podStartSLOduration=2.041003718 
podStartE2EDuration="7.620317762s" podCreationTimestamp="2026-03-12 14:45:26 +0000 UTC" firstStartedPulling="2026-03-12 14:45:26.852374758 +0000 UTC m=+585.860115685" lastFinishedPulling="2026-03-12 14:45:32.431688752 +0000 UTC m=+591.439429729" observedRunningTime="2026-03-12 14:45:33.581137097 +0000 UTC m=+592.588878044" watchObservedRunningTime="2026-03-12 14:45:33.620317762 +0000 UTC m=+592.628058699" Mar 12 14:45:33.673861 master-0 kubenswrapper[37036]: I0312 14:45:33.673760 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6d5c2" podStartSLOduration=2.625253898 podStartE2EDuration="7.673728492s" podCreationTimestamp="2026-03-12 14:45:26 +0000 UTC" firstStartedPulling="2026-03-12 14:45:27.404645352 +0000 UTC m=+586.412386289" lastFinishedPulling="2026-03-12 14:45:32.453119946 +0000 UTC m=+591.460860883" observedRunningTime="2026-03-12 14:45:33.605244778 +0000 UTC m=+592.612985705" watchObservedRunningTime="2026-03-12 14:45:33.673728492 +0000 UTC m=+592.681469429" Mar 12 14:45:33.728463 master-0 kubenswrapper[37036]: I0312 14:45:33.728322 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr" podStartSLOduration=2.286212729 podStartE2EDuration="9.728296182s" podCreationTimestamp="2026-03-12 14:45:24 +0000 UTC" firstStartedPulling="2026-03-12 14:45:24.989776913 +0000 UTC m=+583.997517860" lastFinishedPulling="2026-03-12 14:45:32.431860376 +0000 UTC m=+591.439601313" observedRunningTime="2026-03-12 14:45:33.722058717 +0000 UTC m=+592.729799664" watchObservedRunningTime="2026-03-12 14:45:33.728296182 +0000 UTC m=+592.736037119" Mar 12 14:45:34.501860 master-0 kubenswrapper[37036]: I0312 14:45:34.501793 37036 generic.go:334] "Generic (PLEG): container finished" podID="3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d" containerID="321f7ad8ea7a3d8b6063620d3a5bc21196b8ba9c101ed781ca4f0f4b8746cb9b" exitCode=0 Mar 12 
14:45:34.502355 master-0 kubenswrapper[37036]: I0312 14:45:34.501860 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerDied","Data":"321f7ad8ea7a3d8b6063620d3a5bc21196b8ba9c101ed781ca4f0f4b8746cb9b"} Mar 12 14:45:35.520336 master-0 kubenswrapper[37036]: I0312 14:45:35.520275 37036 generic.go:334] "Generic (PLEG): container finished" podID="3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d" containerID="4d19a1ebb681c85fa1e36b8cccd3c67388b34e3943ff18277c46756fd12bd092" exitCode=0 Mar 12 14:45:35.520336 master-0 kubenswrapper[37036]: I0312 14:45:35.520315 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerDied","Data":"4d19a1ebb681c85fa1e36b8cccd3c67388b34e3943ff18277c46756fd12bd092"} Mar 12 14:45:36.263927 master-0 kubenswrapper[37036]: I0312 14:45:36.261207 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-l6mt6" Mar 12 14:45:36.534600 master-0 kubenswrapper[37036]: I0312 14:45:36.534537 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"41464195c085cb8de103fd5a9c4b81d821fd40416a13b1dfdb1cfa4c9cd29557"} Mar 12 14:45:36.534600 master-0 kubenswrapper[37036]: I0312 14:45:36.534586 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"fb5019d30063d6461a046632b112671adccf6eb8921cfb41a05905fa4f9925e3"} Mar 12 14:45:36.534600 master-0 kubenswrapper[37036]: I0312 14:45:36.534602 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" 
event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"5c587fe2ba275f78aa88953d0c4bc0ad0b2d60e1a6167ecab35ee1cee96b261e"} Mar 12 14:45:36.535140 master-0 kubenswrapper[37036]: I0312 14:45:36.534634 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"e4bd467c3cfc896624f758f0feab5573ba20d34ce8898984fd641b260afdba22"} Mar 12 14:45:36.535140 master-0 kubenswrapper[37036]: I0312 14:45:36.534646 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"893499d09036ad930592ec4c2d3784139e6a22e4a44eb4a9d1da7df3318d7ede"} Mar 12 14:45:36.993651 master-0 kubenswrapper[37036]: I0312 14:45:36.993604 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:36.993920 master-0 kubenswrapper[37036]: I0312 14:45:36.993666 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:36.997956 master-0 kubenswrapper[37036]: I0312 14:45:36.997887 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f48d8466d-4rmwh" Mar 12 14:45:37.548205 master-0 kubenswrapper[37036]: I0312 14:45:37.548122 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6r4g" event={"ID":"3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d","Type":"ContainerStarted","Data":"09a5ddbf6abc45d298013b4cc1a1202419ef1df0644826ef1d12ae8371bc7772"} Mar 12 14:45:37.548889 master-0 kubenswrapper[37036]: I0312 14:45:37.548369 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t6r4g" Mar 12 14:45:37.552240 master-0 kubenswrapper[37036]: I0312 14:45:37.552198 37036 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/console-f48d8466d-4rmwh"
Mar 12 14:45:37.659954 master-0 kubenswrapper[37036]: I0312 14:45:37.650780 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t6r4g" podStartSLOduration=5.900259031 podStartE2EDuration="13.650758605s" podCreationTimestamp="2026-03-12 14:45:24 +0000 UTC" firstStartedPulling="2026-03-12 14:45:24.701430272 +0000 UTC m=+583.709171219" lastFinishedPulling="2026-03-12 14:45:32.451929856 +0000 UTC m=+591.459670793" observedRunningTime="2026-03-12 14:45:37.647604667 +0000 UTC m=+596.655345634" watchObservedRunningTime="2026-03-12 14:45:37.650758605 +0000 UTC m=+596.658499542"
Mar 12 14:45:38.329057 master-0 kubenswrapper[37036]: I0312 14:45:38.328975 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"]
Mar 12 14:45:39.590062 master-0 kubenswrapper[37036]: I0312 14:45:39.589969 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t6r4g"
Mar 12 14:45:39.634360 master-0 kubenswrapper[37036]: I0312 14:45:39.634283 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t6r4g"
Mar 12 14:45:41.787302 master-0 kubenswrapper[37036]: I0312 14:45:41.787224 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q2z2r"
Mar 12 14:45:44.579171 master-0 kubenswrapper[37036]: I0312 14:45:44.579094 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-nt2jr"
Mar 12 14:45:44.780924 master-0 kubenswrapper[37036]: I0312 14:45:44.780811 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-9rrn5"
Mar 12 14:45:46.754436 master-0 kubenswrapper[37036]: I0312 14:45:46.754133 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-9qpxv"
Mar 12 14:45:51.537586 master-0 kubenswrapper[37036]: I0312 14:45:51.537525 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-gw6nq"]
Mar 12 14:45:51.538780 master-0 kubenswrapper[37036]: I0312 14:45:51.538746 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.541242 master-0 kubenswrapper[37036]: I0312 14:45:51.540580 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Mar 12 14:45:51.550845 master-0 kubenswrapper[37036]: I0312 14:45:51.550806 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-gw6nq"]
Mar 12 14:45:51.678754 master-0 kubenswrapper[37036]: I0312 14:45:51.678667 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-pod-volumes-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.678985 master-0 kubenswrapper[37036]: I0312 14:45:51.678801 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr57l\" (UniqueName: \"kubernetes.io/projected/a72c52d0-a7af-406e-be2d-71f8bef1ee02-kube-api-access-xr57l\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.678985 master-0 kubenswrapper[37036]: I0312 14:45:51.678878 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-lvmd-config\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.678985 master-0 kubenswrapper[37036]: I0312 14:45:51.678957 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-csi-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679097 master-0 kubenswrapper[37036]: I0312 14:45:51.678989 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-device-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679097 master-0 kubenswrapper[37036]: I0312 14:45:51.679031 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-sys\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679158 master-0 kubenswrapper[37036]: I0312 14:45:51.679103 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/a72c52d0-a7af-406e-be2d-71f8bef1ee02-metrics-cert\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679158 master-0 kubenswrapper[37036]: I0312 14:45:51.679133 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-run-udev\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679320 master-0 kubenswrapper[37036]: I0312 14:45:51.679278 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-node-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679424 master-0 kubenswrapper[37036]: I0312 14:45:51.679397 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-registration-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.679464 master-0 kubenswrapper[37036]: I0312 14:45:51.679448 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-file-lock-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.781657 master-0 kubenswrapper[37036]: I0312 14:45:51.781579 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-csi-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.781657 master-0 kubenswrapper[37036]: I0312 14:45:51.781646 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-device-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781683 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-sys\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781711 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/a72c52d0-a7af-406e-be2d-71f8bef1ee02-metrics-cert\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781733 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-run-udev\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781777 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-node-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781819 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-registration-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781846 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-file-lock-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781870 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-csi-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781884 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-pod-volumes-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782056 master-0 kubenswrapper[37036]: I0312 14:45:51.781999 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-pod-volumes-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782075 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-run-udev\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782153 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-sys\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782206 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-device-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782235 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-registration-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782226 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr57l\" (UniqueName: \"kubernetes.io/projected/a72c52d0-a7af-406e-be2d-71f8bef1ee02-kube-api-access-xr57l\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782412 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-node-plugin-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782419 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-lvmd-config\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.782679 master-0 kubenswrapper[37036]: I0312 14:45:51.782494 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-file-lock-dir\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.783294 master-0 kubenswrapper[37036]: I0312 14:45:51.782713 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/a72c52d0-a7af-406e-be2d-71f8bef1ee02-lvmd-config\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.788875 master-0 kubenswrapper[37036]: I0312 14:45:51.788766 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/a72c52d0-a7af-406e-be2d-71f8bef1ee02-metrics-cert\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.799289 master-0 kubenswrapper[37036]: I0312 14:45:51.799218 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr57l\" (UniqueName: \"kubernetes.io/projected/a72c52d0-a7af-406e-be2d-71f8bef1ee02-kube-api-access-xr57l\") pod \"vg-manager-gw6nq\" (UID: \"a72c52d0-a7af-406e-be2d-71f8bef1ee02\") " pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:51.853893 master-0 kubenswrapper[37036]: I0312 14:45:51.853822 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:45:52.263544 master-0 kubenswrapper[37036]: I0312 14:45:52.263056 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-gw6nq"]
Mar 12 14:45:52.266179 master-0 kubenswrapper[37036]: W0312 14:45:52.265835 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72c52d0_a7af_406e_be2d_71f8bef1ee02.slice/crio-2253edcb69c4affa5a7ed95d67fd16a1c9d80eca8ed378bb89ab8247927c60cb WatchSource:0}: Error finding container 2253edcb69c4affa5a7ed95d67fd16a1c9d80eca8ed378bb89ab8247927c60cb: Status 404 returned error can't find the container with id 2253edcb69c4affa5a7ed95d67fd16a1c9d80eca8ed378bb89ab8247927c60cb
Mar 12 14:45:52.680280 master-0 kubenswrapper[37036]: I0312 14:45:52.680148 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-gw6nq" event={"ID":"a72c52d0-a7af-406e-be2d-71f8bef1ee02","Type":"ContainerStarted","Data":"c609d64cd11d0c0069a10330f4536123176bc6a1f3adf8ad7c740a493d8c9231"}
Mar 12 14:45:52.680280 master-0 kubenswrapper[37036]: I0312 14:45:52.680221 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-gw6nq" event={"ID":"a72c52d0-a7af-406e-be2d-71f8bef1ee02","Type":"ContainerStarted","Data":"2253edcb69c4affa5a7ed95d67fd16a1c9d80eca8ed378bb89ab8247927c60cb"}
Mar 12 14:45:52.706066 master-0 kubenswrapper[37036]: I0312 14:45:52.705985 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-gw6nq" podStartSLOduration=1.705956861 podStartE2EDuration="1.705956861s" podCreationTimestamp="2026-03-12 14:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:45:52.703823288 +0000 UTC m=+611.711564225" watchObservedRunningTime="2026-03-12 14:45:52.705956861 +0000 UTC m=+611.713697798"
Mar 12 14:45:54.599122 master-0 kubenswrapper[37036]: I0312 14:45:54.598954 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t6r4g"
Mar 12 14:45:54.710956 master-0 kubenswrapper[37036]: I0312 14:45:54.710910 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-gw6nq_a72c52d0-a7af-406e-be2d-71f8bef1ee02/vg-manager/0.log"
Mar 12 14:45:54.710956 master-0 kubenswrapper[37036]: I0312 14:45:54.710953 37036 generic.go:334] "Generic (PLEG): container finished" podID="a72c52d0-a7af-406e-be2d-71f8bef1ee02" containerID="c609d64cd11d0c0069a10330f4536123176bc6a1f3adf8ad7c740a493d8c9231" exitCode=1
Mar 12 14:45:54.711260 master-0 kubenswrapper[37036]: I0312 14:45:54.710983 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-gw6nq" event={"ID":"a72c52d0-a7af-406e-be2d-71f8bef1ee02","Type":"ContainerDied","Data":"c609d64cd11d0c0069a10330f4536123176bc6a1f3adf8ad7c740a493d8c9231"}
Mar 12 14:45:54.711543 master-0 kubenswrapper[37036]: I0312 14:45:54.711499 37036 scope.go:117] "RemoveContainer" containerID="c609d64cd11d0c0069a10330f4536123176bc6a1f3adf8ad7c740a493d8c9231"
Mar 12 14:45:55.015659 master-0 kubenswrapper[37036]: I0312 14:45:55.015596 37036 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Mar 12 14:45:55.551288 master-0 kubenswrapper[37036]: I0312 14:45:55.551131 37036 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-12T14:45:55.01565704Z","Handler":null,"Name":""}
Mar 12 14:45:55.553077 master-0 kubenswrapper[37036]: I0312 14:45:55.553006 37036 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 12 14:45:55.553077 master-0 kubenswrapper[37036]: I0312 14:45:55.553077 37036 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 12 14:45:55.721271 master-0 kubenswrapper[37036]: I0312 14:45:55.721179 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-gw6nq_a72c52d0-a7af-406e-be2d-71f8bef1ee02/vg-manager/0.log"
Mar 12 14:45:55.721271 master-0 kubenswrapper[37036]: I0312 14:45:55.721258 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-gw6nq" event={"ID":"a72c52d0-a7af-406e-be2d-71f8bef1ee02","Type":"ContainerStarted","Data":"59f97ac0ee615c38b065c1d552bbf6cf2d586ae8b6ebd441757b3d98bbaa42e7"}
Mar 12 14:46:01.854588 master-0 kubenswrapper[37036]: I0312 14:46:01.854431 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:46:01.857488 master-0 kubenswrapper[37036]: I0312 14:46:01.857450 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:46:02.803762 master-0 kubenswrapper[37036]: I0312 14:46:02.803651 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:46:02.805130 master-0 kubenswrapper[37036]: I0312 14:46:02.805078 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-gw6nq"
Mar 12 14:46:03.362385 master-0 kubenswrapper[37036]: I0312 14:46:03.362276 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b9b4765bb-vsz5x" podUID="321f1912-4218-4afd-add3-ce16ef44420f" containerName="console" containerID="cri-o://5e381331e26e635662a3062f1769b24d075c4f94bd20233b38d347328db44ac3" gracePeriod=15
Mar 12 14:46:03.816368 master-0 kubenswrapper[37036]: I0312 14:46:03.816328 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9b4765bb-vsz5x_321f1912-4218-4afd-add3-ce16ef44420f/console/0.log"
Mar 12 14:46:03.816660 master-0 kubenswrapper[37036]: I0312 14:46:03.816634 37036 generic.go:334] "Generic (PLEG): container finished" podID="321f1912-4218-4afd-add3-ce16ef44420f" containerID="5e381331e26e635662a3062f1769b24d075c4f94bd20233b38d347328db44ac3" exitCode=2
Mar 12 14:46:03.816845 master-0 kubenswrapper[37036]: I0312 14:46:03.816792 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9b4765bb-vsz5x" event={"ID":"321f1912-4218-4afd-add3-ce16ef44420f","Type":"ContainerDied","Data":"5e381331e26e635662a3062f1769b24d075c4f94bd20233b38d347328db44ac3"}
Mar 12 14:46:03.889330 master-0 kubenswrapper[37036]: I0312 14:46:03.889279 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9b4765bb-vsz5x_321f1912-4218-4afd-add3-ce16ef44420f/console/0.log"
Mar 12 14:46:03.889530 master-0 kubenswrapper[37036]: I0312 14:46:03.889364 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b9b4765bb-vsz5x"
Mar 12 14:46:04.033003 master-0 kubenswrapper[37036]: I0312 14:46:04.032949 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033265 master-0 kubenswrapper[37036]: I0312 14:46:04.033063 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033265 master-0 kubenswrapper[37036]: I0312 14:46:04.033134 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp49n\" (UniqueName: \"kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033265 master-0 kubenswrapper[37036]: I0312 14:46:04.033203 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033265 master-0 kubenswrapper[37036]: I0312 14:46:04.033224 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033265 master-0 kubenswrapper[37036]: I0312 14:46:04.033247 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033456 master-0 kubenswrapper[37036]: I0312 14:46:04.033289 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca\") pod \"321f1912-4218-4afd-add3-ce16ef44420f\" (UID: \"321f1912-4218-4afd-add3-ce16ef44420f\") "
Mar 12 14:46:04.033527 master-0 kubenswrapper[37036]: I0312 14:46:04.033480 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config" (OuterVolumeSpecName: "console-config") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:46:04.033660 master-0 kubenswrapper[37036]: I0312 14:46:04.033493 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:46:04.033660 master-0 kubenswrapper[37036]: I0312 14:46:04.033606 37036 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.033660 master-0 kubenswrapper[37036]: I0312 14:46:04.033632 37036 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.033878 master-0 kubenswrapper[37036]: I0312 14:46:04.033821 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:46:04.034276 master-0 kubenswrapper[37036]: I0312 14:46:04.034238 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca" (OuterVolumeSpecName: "service-ca") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:46:04.035832 master-0 kubenswrapper[37036]: I0312 14:46:04.035802 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:46:04.035969 master-0 kubenswrapper[37036]: I0312 14:46:04.035943 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:46:04.036251 master-0 kubenswrapper[37036]: I0312 14:46:04.036222 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n" (OuterVolumeSpecName: "kube-api-access-zp49n") pod "321f1912-4218-4afd-add3-ce16ef44420f" (UID: "321f1912-4218-4afd-add3-ce16ef44420f"). InnerVolumeSpecName "kube-api-access-zp49n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:46:04.136020 master-0 kubenswrapper[37036]: I0312 14:46:04.135811 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp49n\" (UniqueName: \"kubernetes.io/projected/321f1912-4218-4afd-add3-ce16ef44420f-kube-api-access-zp49n\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.136020 master-0 kubenswrapper[37036]: I0312 14:46:04.135869 37036 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.136020 master-0 kubenswrapper[37036]: I0312 14:46:04.135882 37036 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.136020 master-0 kubenswrapper[37036]: I0312 14:46:04.135908 37036 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/321f1912-4218-4afd-add3-ce16ef44420f-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.136020 master-0 kubenswrapper[37036]: I0312 14:46:04.135925 37036 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/321f1912-4218-4afd-add3-ce16ef44420f-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 14:46:04.839619 master-0 kubenswrapper[37036]: I0312 14:46:04.839553 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b9b4765bb-vsz5x_321f1912-4218-4afd-add3-ce16ef44420f/console/0.log"
Mar 12 14:46:04.840889 master-0 kubenswrapper[37036]: I0312 14:46:04.840508 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b9b4765bb-vsz5x"
Mar 12 14:46:04.845287 master-0 kubenswrapper[37036]: I0312 14:46:04.845154 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b9b4765bb-vsz5x" event={"ID":"321f1912-4218-4afd-add3-ce16ef44420f","Type":"ContainerDied","Data":"a2421fa142535539e6f10fc6ae916d7380a4af54642a7b09be8ea550b442eecf"}
Mar 12 14:46:04.845633 master-0 kubenswrapper[37036]: I0312 14:46:04.845314 37036 scope.go:117] "RemoveContainer" containerID="5e381331e26e635662a3062f1769b24d075c4f94bd20233b38d347328db44ac3"
Mar 12 14:46:04.888427 master-0 kubenswrapper[37036]: I0312 14:46:04.888361 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"]
Mar 12 14:46:04.899835 master-0 kubenswrapper[37036]: I0312 14:46:04.899774 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b9b4765bb-vsz5x"]
Mar 12 14:46:04.978766 master-0 kubenswrapper[37036]: I0312 14:46:04.978696 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"]
Mar 12 14:46:04.979109 master-0 kubenswrapper[37036]: E0312 14:46:04.979091 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321f1912-4218-4afd-add3-ce16ef44420f" containerName="console"
Mar 12 14:46:04.979109 master-0 kubenswrapper[37036]: I0312 14:46:04.979109 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="321f1912-4218-4afd-add3-ce16ef44420f" containerName="console"
Mar 12 14:46:04.979300 master-0 kubenswrapper[37036]: I0312 14:46:04.979273 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="321f1912-4218-4afd-add3-ce16ef44420f" containerName="console"
Mar 12 14:46:04.979857 master-0 kubenswrapper[37036]: I0312 14:46:04.979837 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zp49j"
Mar 12 14:46:04.981812 master-0 kubenswrapper[37036]: I0312 14:46:04.981752 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 12 14:46:04.984058 master-0 kubenswrapper[37036]: I0312 14:46:04.984028 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 12 14:46:05.001175 master-0 kubenswrapper[37036]: I0312 14:46:05.001118 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"]
Mar 12 14:46:05.154570 master-0 kubenswrapper[37036]: I0312 14:46:05.154426 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf45l\" (UniqueName: \"kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l\") pod \"openstack-operator-index-zp49j\" (UID: \"51960efb-7643-4773-b58c-d40f1dc89b3a\") " pod="openstack-operators/openstack-operator-index-zp49j"
Mar 12 14:46:05.255688 master-0 kubenswrapper[37036]: I0312 14:46:05.255625 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf45l\" (UniqueName: \"kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l\") pod \"openstack-operator-index-zp49j\" (UID: \"51960efb-7643-4773-b58c-d40f1dc89b3a\") " pod="openstack-operators/openstack-operator-index-zp49j"
Mar 12 14:46:05.266292 master-0 kubenswrapper[37036]: I0312 14:46:05.266244 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321f1912-4218-4afd-add3-ce16ef44420f" path="/var/lib/kubelet/pods/321f1912-4218-4afd-add3-ce16ef44420f/volumes"
Mar 12 14:46:05.274231 master-0 kubenswrapper[37036]: I0312 14:46:05.274181 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf45l\" (UniqueName: \"kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l\") pod \"openstack-operator-index-zp49j\" (UID: \"51960efb-7643-4773-b58c-d40f1dc89b3a\") " pod="openstack-operators/openstack-operator-index-zp49j"
Mar 12 14:46:05.295988 master-0 kubenswrapper[37036]: I0312 14:46:05.295640 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zp49j"
Mar 12 14:46:05.720041 master-0 kubenswrapper[37036]: I0312 14:46:05.719993 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"]
Mar 12 14:46:05.721376 master-0 kubenswrapper[37036]: W0312 14:46:05.721323 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51960efb_7643_4773_b58c_d40f1dc89b3a.slice/crio-6ba35578eae04ad49721c0f625432292f9ab003a3cd94d7c8904f46adf65143c WatchSource:0}: Error finding container 6ba35578eae04ad49721c0f625432292f9ab003a3cd94d7c8904f46adf65143c: Status 404 returned error can't find the container with id 6ba35578eae04ad49721c0f625432292f9ab003a3cd94d7c8904f46adf65143c
Mar 12 14:46:05.848675 master-0 kubenswrapper[37036]: I0312 14:46:05.848629 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zp49j" event={"ID":"51960efb-7643-4773-b58c-d40f1dc89b3a","Type":"ContainerStarted","Data":"6ba35578eae04ad49721c0f625432292f9ab003a3cd94d7c8904f46adf65143c"}
Mar 12 14:46:07.866460 master-0 kubenswrapper[37036]: I0312 14:46:07.866394 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zp49j" event={"ID":"51960efb-7643-4773-b58c-d40f1dc89b3a","Type":"ContainerStarted","Data":"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743"}
Mar 12 14:46:07.891598 master-0 kubenswrapper[37036]: I0312 14:46:07.891515 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zp49j" podStartSLOduration=2.906338758 podStartE2EDuration="3.8914981s" podCreationTimestamp="2026-03-12 14:46:04 +0000 UTC" firstStartedPulling="2026-03-12 14:46:05.723589323 +0000 UTC m=+624.731330260" lastFinishedPulling="2026-03-12 14:46:06.708748665 +0000 UTC m=+625.716489602" observedRunningTime="2026-03-12 14:46:07.887661981 +0000 UTC m=+626.895402928" watchObservedRunningTime="2026-03-12 14:46:07.8914981 +0000 UTC m=+626.899239037"
Mar 12 14:46:08.519432 master-0 kubenswrapper[37036]: I0312 14:46:08.519370 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"]
Mar 12 14:46:09.126944 master-0 kubenswrapper[37036]: I0312 14:46:09.126859 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z25bx"]
Mar 12 14:46:09.128026 master-0 kubenswrapper[37036]: I0312 14:46:09.128001 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z25bx"
Mar 12 14:46:09.138985 master-0 kubenswrapper[37036]: I0312 14:46:09.138890 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z25bx"]
Mar 12 14:46:09.219625 master-0 kubenswrapper[37036]: I0312 14:46:09.219543 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6q9\" (UniqueName: \"kubernetes.io/projected/084b4b42-9dc1-4fe0-8f14-273b83ce0e05-kube-api-access-7t6q9\") pod \"openstack-operator-index-z25bx\" (UID: \"084b4b42-9dc1-4fe0-8f14-273b83ce0e05\") " pod="openstack-operators/openstack-operator-index-z25bx"
Mar 12 14:46:09.321594 master-0 kubenswrapper[37036]: I0312 14:46:09.321074 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6q9\" (UniqueName: \"kubernetes.io/projected/084b4b42-9dc1-4fe0-8f14-273b83ce0e05-kube-api-access-7t6q9\") pod \"openstack-operator-index-z25bx\" (UID: \"084b4b42-9dc1-4fe0-8f14-273b83ce0e05\") " pod="openstack-operators/openstack-operator-index-z25bx"
Mar 12 14:46:09.339741 master-0 kubenswrapper[37036]: I0312 14:46:09.339654 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6q9\" (UniqueName: \"kubernetes.io/projected/084b4b42-9dc1-4fe0-8f14-273b83ce0e05-kube-api-access-7t6q9\") pod \"openstack-operator-index-z25bx\" (UID: \"084b4b42-9dc1-4fe0-8f14-273b83ce0e05\") " pod="openstack-operators/openstack-operator-index-z25bx"
Mar 12 14:46:09.445465 master-0 kubenswrapper[37036]: I0312 14:46:09.445329 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z25bx"
Mar 12 14:46:09.868712 master-0 kubenswrapper[37036]: I0312 14:46:09.868631 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z25bx"]
Mar 12 14:46:09.884775 master-0 kubenswrapper[37036]: I0312 14:46:09.884714 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zp49j" podUID="51960efb-7643-4773-b58c-d40f1dc89b3a" containerName="registry-server" containerID="cri-o://6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743" gracePeriod=2
Mar 12 14:46:10.298825 master-0 kubenswrapper[37036]: I0312 14:46:10.298776 37036 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-zp49j" Mar 12 14:46:10.446815 master-0 kubenswrapper[37036]: I0312 14:46:10.446759 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf45l\" (UniqueName: \"kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l\") pod \"51960efb-7643-4773-b58c-d40f1dc89b3a\" (UID: \"51960efb-7643-4773-b58c-d40f1dc89b3a\") " Mar 12 14:46:10.450200 master-0 kubenswrapper[37036]: I0312 14:46:10.450147 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l" (OuterVolumeSpecName: "kube-api-access-pf45l") pod "51960efb-7643-4773-b58c-d40f1dc89b3a" (UID: "51960efb-7643-4773-b58c-d40f1dc89b3a"). InnerVolumeSpecName "kube-api-access-pf45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:46:10.549698 master-0 kubenswrapper[37036]: I0312 14:46:10.549632 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf45l\" (UniqueName: \"kubernetes.io/projected/51960efb-7643-4773-b58c-d40f1dc89b3a-kube-api-access-pf45l\") on node \"master-0\" DevicePath \"\"" Mar 12 14:46:10.894620 master-0 kubenswrapper[37036]: I0312 14:46:10.894463 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z25bx" event={"ID":"084b4b42-9dc1-4fe0-8f14-273b83ce0e05","Type":"ContainerStarted","Data":"5db75dcd92a1fe586f6d0f03f9e398bb6efff1df17b85b3d920c7b14b36c88ff"} Mar 12 14:46:10.894620 master-0 kubenswrapper[37036]: I0312 14:46:10.894528 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z25bx" event={"ID":"084b4b42-9dc1-4fe0-8f14-273b83ce0e05","Type":"ContainerStarted","Data":"14f1a45b4ca62833763a0361e51d879e57fc97a06ed67ca6e37c73f17379941c"} Mar 12 14:46:10.896542 master-0 kubenswrapper[37036]: I0312 14:46:10.896486 
37036 generic.go:334] "Generic (PLEG): container finished" podID="51960efb-7643-4773-b58c-d40f1dc89b3a" containerID="6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743" exitCode=0 Mar 12 14:46:10.896611 master-0 kubenswrapper[37036]: I0312 14:46:10.896554 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zp49j" Mar 12 14:46:10.896694 master-0 kubenswrapper[37036]: I0312 14:46:10.896548 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zp49j" event={"ID":"51960efb-7643-4773-b58c-d40f1dc89b3a","Type":"ContainerDied","Data":"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743"} Mar 12 14:46:10.896776 master-0 kubenswrapper[37036]: I0312 14:46:10.896741 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zp49j" event={"ID":"51960efb-7643-4773-b58c-d40f1dc89b3a","Type":"ContainerDied","Data":"6ba35578eae04ad49721c0f625432292f9ab003a3cd94d7c8904f46adf65143c"} Mar 12 14:46:10.896839 master-0 kubenswrapper[37036]: I0312 14:46:10.896791 37036 scope.go:117] "RemoveContainer" containerID="6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743" Mar 12 14:46:10.925965 master-0 kubenswrapper[37036]: I0312 14:46:10.922772 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z25bx" podStartSLOduration=1.446322593 podStartE2EDuration="1.922745713s" podCreationTimestamp="2026-03-12 14:46:09 +0000 UTC" firstStartedPulling="2026-03-12 14:46:09.885200786 +0000 UTC m=+628.892941723" lastFinishedPulling="2026-03-12 14:46:10.361623906 +0000 UTC m=+629.369364843" observedRunningTime="2026-03-12 14:46:10.911005383 +0000 UTC m=+629.918746320" watchObservedRunningTime="2026-03-12 14:46:10.922745713 +0000 UTC m=+629.930486660" Mar 12 14:46:10.931214 master-0 kubenswrapper[37036]: I0312 
14:46:10.931063 37036 scope.go:117] "RemoveContainer" containerID="6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743" Mar 12 14:46:10.934333 master-0 kubenswrapper[37036]: E0312 14:46:10.934258 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743\": container with ID starting with 6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743 not found: ID does not exist" containerID="6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743" Mar 12 14:46:10.934470 master-0 kubenswrapper[37036]: I0312 14:46:10.934358 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743"} err="failed to get container status \"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743\": rpc error: code = NotFound desc = could not find container \"6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743\": container with ID starting with 6b1d13c5701a2777d5f7a88f5a64fcca8151b96a06a33c79e42866ed16c31743 not found: ID does not exist" Mar 12 14:46:10.943656 master-0 kubenswrapper[37036]: I0312 14:46:10.943603 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"] Mar 12 14:46:10.953920 master-0 kubenswrapper[37036]: I0312 14:46:10.953460 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zp49j"] Mar 12 14:46:11.245299 master-0 kubenswrapper[37036]: I0312 14:46:11.245206 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51960efb-7643-4773-b58c-d40f1dc89b3a" path="/var/lib/kubelet/pods/51960efb-7643-4773-b58c-d40f1dc89b3a/volumes" Mar 12 14:46:19.445850 master-0 kubenswrapper[37036]: I0312 14:46:19.445775 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-index-z25bx" Mar 12 14:46:19.446600 master-0 kubenswrapper[37036]: I0312 14:46:19.446065 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-z25bx" Mar 12 14:46:19.496674 master-0 kubenswrapper[37036]: I0312 14:46:19.496619 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z25bx" Mar 12 14:46:20.065536 master-0 kubenswrapper[37036]: I0312 14:46:20.065449 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z25bx" Mar 12 14:46:22.251173 master-0 kubenswrapper[37036]: I0312 14:46:22.251108 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72"] Mar 12 14:46:22.251796 master-0 kubenswrapper[37036]: E0312 14:46:22.251572 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51960efb-7643-4773-b58c-d40f1dc89b3a" containerName="registry-server" Mar 12 14:46:22.251796 master-0 kubenswrapper[37036]: I0312 14:46:22.251591 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="51960efb-7643-4773-b58c-d40f1dc89b3a" containerName="registry-server" Mar 12 14:46:22.251874 master-0 kubenswrapper[37036]: I0312 14:46:22.251811 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="51960efb-7643-4773-b58c-d40f1dc89b3a" containerName="registry-server" Mar 12 14:46:22.253397 master-0 kubenswrapper[37036]: I0312 14:46:22.253355 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.269137 master-0 kubenswrapper[37036]: I0312 14:46:22.269080 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72"] Mar 12 14:46:22.280628 master-0 kubenswrapper[37036]: I0312 14:46:22.277113 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwm8\" (UniqueName: \"kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.280628 master-0 kubenswrapper[37036]: I0312 14:46:22.277182 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.280628 master-0 kubenswrapper[37036]: I0312 14:46:22.277277 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.379126 master-0 kubenswrapper[37036]: I0312 14:46:22.379081 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.379493 master-0 kubenswrapper[37036]: I0312 14:46:22.379473 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwm8\" (UniqueName: \"kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.379661 master-0 kubenswrapper[37036]: I0312 14:46:22.379595 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.379744 master-0 kubenswrapper[37036]: I0312 14:46:22.379649 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.380938 master-0 kubenswrapper[37036]: I0312 14:46:22.380275 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util\") pod 
\"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.394463 master-0 kubenswrapper[37036]: I0312 14:46:22.394410 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwm8\" (UniqueName: \"kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.569189 master-0 kubenswrapper[37036]: I0312 14:46:22.569067 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:22.984558 master-0 kubenswrapper[37036]: I0312 14:46:22.983817 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72"] Mar 12 14:46:22.989639 master-0 kubenswrapper[37036]: W0312 14:46:22.989572 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7081718f_3039_42ea_863d_7ccabdcc8808.slice/crio-43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd WatchSource:0}: Error finding container 43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd: Status 404 returned error can't find the container with id 43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd Mar 12 14:46:23.051331 master-0 kubenswrapper[37036]: I0312 14:46:23.051265 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" 
event={"ID":"7081718f-3039-42ea-863d-7ccabdcc8808","Type":"ContainerStarted","Data":"43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd"} Mar 12 14:46:24.061427 master-0 kubenswrapper[37036]: I0312 14:46:24.061373 37036 generic.go:334] "Generic (PLEG): container finished" podID="7081718f-3039-42ea-863d-7ccabdcc8808" containerID="fd2fe9c470ef5f3e4238485be05f82ea956be835de963d2701678c0cf9537339" exitCode=0 Mar 12 14:46:24.062091 master-0 kubenswrapper[37036]: I0312 14:46:24.061441 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" event={"ID":"7081718f-3039-42ea-863d-7ccabdcc8808","Type":"ContainerDied","Data":"fd2fe9c470ef5f3e4238485be05f82ea956be835de963d2701678c0cf9537339"} Mar 12 14:46:26.090352 master-0 kubenswrapper[37036]: I0312 14:46:26.090295 37036 generic.go:334] "Generic (PLEG): container finished" podID="7081718f-3039-42ea-863d-7ccabdcc8808" containerID="e5c6cb1f1ca1a7e3370aaa5738c37f4290388a24bb68fc78eb4107df92f2b36a" exitCode=0 Mar 12 14:46:26.090352 master-0 kubenswrapper[37036]: I0312 14:46:26.090345 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" event={"ID":"7081718f-3039-42ea-863d-7ccabdcc8808","Type":"ContainerDied","Data":"e5c6cb1f1ca1a7e3370aaa5738c37f4290388a24bb68fc78eb4107df92f2b36a"} Mar 12 14:46:27.100719 master-0 kubenswrapper[37036]: I0312 14:46:27.100654 37036 generic.go:334] "Generic (PLEG): container finished" podID="7081718f-3039-42ea-863d-7ccabdcc8808" containerID="0cf7996de1634c3f1c4da5ddc4332e17026d1604898f8985f6cfe8b8ab83decc" exitCode=0 Mar 12 14:46:27.100719 master-0 kubenswrapper[37036]: I0312 14:46:27.100702 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" 
event={"ID":"7081718f-3039-42ea-863d-7ccabdcc8808","Type":"ContainerDied","Data":"0cf7996de1634c3f1c4da5ddc4332e17026d1604898f8985f6cfe8b8ab83decc"} Mar 12 14:46:28.453372 master-0 kubenswrapper[37036]: I0312 14:46:28.453314 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:28.514021 master-0 kubenswrapper[37036]: I0312 14:46:28.513939 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util\") pod \"7081718f-3039-42ea-863d-7ccabdcc8808\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " Mar 12 14:46:28.514325 master-0 kubenswrapper[37036]: I0312 14:46:28.514296 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle\") pod \"7081718f-3039-42ea-863d-7ccabdcc8808\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " Mar 12 14:46:28.514450 master-0 kubenswrapper[37036]: I0312 14:46:28.514426 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxwm8\" (UniqueName: \"kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8\") pod \"7081718f-3039-42ea-863d-7ccabdcc8808\" (UID: \"7081718f-3039-42ea-863d-7ccabdcc8808\") " Mar 12 14:46:28.515975 master-0 kubenswrapper[37036]: I0312 14:46:28.515939 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle" (OuterVolumeSpecName: "bundle") pod "7081718f-3039-42ea-863d-7ccabdcc8808" (UID: "7081718f-3039-42ea-863d-7ccabdcc8808"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:46:28.518347 master-0 kubenswrapper[37036]: I0312 14:46:28.518309 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8" (OuterVolumeSpecName: "kube-api-access-wxwm8") pod "7081718f-3039-42ea-863d-7ccabdcc8808" (UID: "7081718f-3039-42ea-863d-7ccabdcc8808"). InnerVolumeSpecName "kube-api-access-wxwm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:46:28.527396 master-0 kubenswrapper[37036]: I0312 14:46:28.527346 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util" (OuterVolumeSpecName: "util") pod "7081718f-3039-42ea-863d-7ccabdcc8808" (UID: "7081718f-3039-42ea-863d-7ccabdcc8808"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:46:28.616166 master-0 kubenswrapper[37036]: I0312 14:46:28.616079 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxwm8\" (UniqueName: \"kubernetes.io/projected/7081718f-3039-42ea-863d-7ccabdcc8808-kube-api-access-wxwm8\") on node \"master-0\" DevicePath \"\"" Mar 12 14:46:28.616166 master-0 kubenswrapper[37036]: I0312 14:46:28.616147 37036 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-util\") on node \"master-0\" DevicePath \"\"" Mar 12 14:46:28.616166 master-0 kubenswrapper[37036]: I0312 14:46:28.616160 37036 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7081718f-3039-42ea-863d-7ccabdcc8808-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:46:29.121243 master-0 kubenswrapper[37036]: I0312 14:46:29.121182 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" event={"ID":"7081718f-3039-42ea-863d-7ccabdcc8808","Type":"ContainerDied","Data":"43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd"} Mar 12 14:46:29.121243 master-0 kubenswrapper[37036]: I0312 14:46:29.121241 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43720f24de1d29295c47d969c2a652e5e3b1e24cbef61b5bedaa1a5f10d6e5cd" Mar 12 14:46:29.121635 master-0 kubenswrapper[37036]: I0312 14:46:29.121308 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477klq72" Mar 12 14:46:31.634736 master-0 kubenswrapper[37036]: I0312 14:46:31.634667 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8"] Mar 12 14:46:31.635406 master-0 kubenswrapper[37036]: E0312 14:46:31.635232 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="util" Mar 12 14:46:31.635406 master-0 kubenswrapper[37036]: I0312 14:46:31.635253 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="util" Mar 12 14:46:31.635406 master-0 kubenswrapper[37036]: E0312 14:46:31.635274 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="extract" Mar 12 14:46:31.635406 master-0 kubenswrapper[37036]: I0312 14:46:31.635283 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="extract" Mar 12 14:46:31.635406 master-0 kubenswrapper[37036]: E0312 14:46:31.635319 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="pull" Mar 12 14:46:31.635406 master-0 
kubenswrapper[37036]: I0312 14:46:31.635328 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="pull" Mar 12 14:46:31.635597 master-0 kubenswrapper[37036]: I0312 14:46:31.635545 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="7081718f-3039-42ea-863d-7ccabdcc8808" containerName="extract" Mar 12 14:46:31.636510 master-0 kubenswrapper[37036]: I0312 14:46:31.636483 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" Mar 12 14:46:31.685216 master-0 kubenswrapper[37036]: I0312 14:46:31.685139 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8"] Mar 12 14:46:31.769924 master-0 kubenswrapper[37036]: I0312 14:46:31.768649 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh9gv\" (UniqueName: \"kubernetes.io/projected/5025c698-0a43-4257-917f-a1438dbb7fd2-kube-api-access-dh9gv\") pod \"openstack-operator-controller-init-65b9994cf8-xjww8\" (UID: \"5025c698-0a43-4257-917f-a1438dbb7fd2\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" Mar 12 14:46:31.870649 master-0 kubenswrapper[37036]: I0312 14:46:31.870590 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh9gv\" (UniqueName: \"kubernetes.io/projected/5025c698-0a43-4257-917f-a1438dbb7fd2-kube-api-access-dh9gv\") pod \"openstack-operator-controller-init-65b9994cf8-xjww8\" (UID: \"5025c698-0a43-4257-917f-a1438dbb7fd2\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" Mar 12 14:46:31.898754 master-0 kubenswrapper[37036]: I0312 14:46:31.898659 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh9gv\" (UniqueName: 
\"kubernetes.io/projected/5025c698-0a43-4257-917f-a1438dbb7fd2-kube-api-access-dh9gv\") pod \"openstack-operator-controller-init-65b9994cf8-xjww8\" (UID: \"5025c698-0a43-4257-917f-a1438dbb7fd2\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" Mar 12 14:46:31.953233 master-0 kubenswrapper[37036]: I0312 14:46:31.953159 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" Mar 12 14:46:33.135086 master-0 kubenswrapper[37036]: W0312 14:46:33.135020 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5025c698_0a43_4257_917f_a1438dbb7fd2.slice/crio-7a963bc955439a0c251f3f50e602869c37a54acfb7c753bf86f0dab527ae78ca WatchSource:0}: Error finding container 7a963bc955439a0c251f3f50e602869c37a54acfb7c753bf86f0dab527ae78ca: Status 404 returned error can't find the container with id 7a963bc955439a0c251f3f50e602869c37a54acfb7c753bf86f0dab527ae78ca Mar 12 14:46:33.144827 master-0 kubenswrapper[37036]: I0312 14:46:33.144393 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8"] Mar 12 14:46:33.161929 master-0 kubenswrapper[37036]: I0312 14:46:33.160222 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" event={"ID":"5025c698-0a43-4257-917f-a1438dbb7fd2","Type":"ContainerStarted","Data":"7a963bc955439a0c251f3f50e602869c37a54acfb7c753bf86f0dab527ae78ca"} Mar 12 14:46:38.238637 master-0 kubenswrapper[37036]: I0312 14:46:38.238564 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" event={"ID":"5025c698-0a43-4257-917f-a1438dbb7fd2","Type":"ContainerStarted","Data":"ddcada857a117fcdfcb3b4f7d7cc7533ed814ab4f75f4f8027b3aaf26032ffba"} Mar 
12 14:46:38.240003 master-0 kubenswrapper[37036]: I0312 14:46:38.239446 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8"
Mar 12 14:46:38.269462 master-0 kubenswrapper[37036]: I0312 14:46:38.269367 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8" podStartSLOduration=2.757875499 podStartE2EDuration="7.269350187s" podCreationTimestamp="2026-03-12 14:46:31 +0000 UTC" firstStartedPulling="2026-03-12 14:46:33.142006529 +0000 UTC m=+652.149747466" lastFinishedPulling="2026-03-12 14:46:37.653481227 +0000 UTC m=+656.661222154" observedRunningTime="2026-03-12 14:46:38.26563044 +0000 UTC m=+657.273371387" watchObservedRunningTime="2026-03-12 14:46:38.269350187 +0000 UTC m=+657.277091124"
Mar 12 14:46:51.955836 master-0 kubenswrapper[37036]: I0312 14:46:51.955767 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-xjww8"
Mar 12 14:47:12.697012 master-0 kubenswrapper[37036]: I0312 14:47:12.696849 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"]
Mar 12 14:47:12.698007 master-0 kubenswrapper[37036]: I0312 14:47:12.697985 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"
Mar 12 14:47:12.710843 master-0 kubenswrapper[37036]: I0312 14:47:12.710768 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"]
Mar 12 14:47:12.715381 master-0 kubenswrapper[37036]: I0312 14:47:12.712096 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"
Mar 12 14:47:12.745819 master-0 kubenswrapper[37036]: I0312 14:47:12.745680 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"]
Mar 12 14:47:12.766926 master-0 kubenswrapper[37036]: I0312 14:47:12.766824 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"]
Mar 12 14:47:12.786923 master-0 kubenswrapper[37036]: I0312 14:47:12.783946 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mql\" (UniqueName: \"kubernetes.io/projected/25b8b8bd-9d46-4f49-9258-d9369124ceb9-kube-api-access-88mql\") pod \"barbican-operator-controller-manager-677bd678f7-qnrd8\" (UID: \"25b8b8bd-9d46-4f49-9258-d9369124ceb9\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"
Mar 12 14:47:12.786923 master-0 kubenswrapper[37036]: I0312 14:47:12.784036 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4t7t\" (UniqueName: \"kubernetes.io/projected/4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0-kube-api-access-b4t7t\") pod \"cinder-operator-controller-manager-984cd4dcf-rrwhk\" (UID: \"4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"
Mar 12 14:47:12.795484 master-0 kubenswrapper[37036]: I0312 14:47:12.794960 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"]
Mar 12 14:47:12.798842 master-0 kubenswrapper[37036]: I0312 14:47:12.796214 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"
Mar 12 14:47:12.818348 master-0 kubenswrapper[37036]: I0312 14:47:12.818295 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"]
Mar 12 14:47:12.874167 master-0 kubenswrapper[37036]: I0312 14:47:12.871679 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"]
Mar 12 14:47:12.874167 master-0 kubenswrapper[37036]: I0312 14:47:12.872792 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"
Mar 12 14:47:12.899017 master-0 kubenswrapper[37036]: I0312 14:47:12.889770 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"]
Mar 12 14:47:12.899017 master-0 kubenswrapper[37036]: I0312 14:47:12.891061 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4t7t\" (UniqueName: \"kubernetes.io/projected/4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0-kube-api-access-b4t7t\") pod \"cinder-operator-controller-manager-984cd4dcf-rrwhk\" (UID: \"4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"
Mar 12 14:47:12.899017 master-0 kubenswrapper[37036]: I0312 14:47:12.891133 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhv5t\" (UniqueName: \"kubernetes.io/projected/e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b-kube-api-access-lhv5t\") pod \"designate-operator-controller-manager-66d56f6ff4-4h9g4\" (UID: \"e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"
Mar 12 14:47:12.899017 master-0 kubenswrapper[37036]: I0312 14:47:12.891789 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mql\" (UniqueName: \"kubernetes.io/projected/25b8b8bd-9d46-4f49-9258-d9369124ceb9-kube-api-access-88mql\") pod \"barbican-operator-controller-manager-677bd678f7-qnrd8\" (UID: \"25b8b8bd-9d46-4f49-9258-d9369124ceb9\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"
Mar 12 14:47:12.899017 master-0 kubenswrapper[37036]: I0312 14:47:12.891865 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crfxb\" (UniqueName: \"kubernetes.io/projected/d754c42d-f9cc-4bae-9941-56246ef0cda2-kube-api-access-crfxb\") pod \"glance-operator-controller-manager-5964f64c48-qjq5j\" (UID: \"d754c42d-f9cc-4bae-9941-56246ef0cda2\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"
Mar 12 14:47:12.942029 master-0 kubenswrapper[37036]: I0312 14:47:12.941660 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4t7t\" (UniqueName: \"kubernetes.io/projected/4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0-kube-api-access-b4t7t\") pod \"cinder-operator-controller-manager-984cd4dcf-rrwhk\" (UID: \"4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"
Mar 12 14:47:12.947838 master-0 kubenswrapper[37036]: I0312 14:47:12.947661 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mql\" (UniqueName: \"kubernetes.io/projected/25b8b8bd-9d46-4f49-9258-d9369124ceb9-kube-api-access-88mql\") pod \"barbican-operator-controller-manager-677bd678f7-qnrd8\" (UID: \"25b8b8bd-9d46-4f49-9258-d9369124ceb9\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"
Mar 12 14:47:12.964067 master-0 kubenswrapper[37036]: I0312 14:47:12.964002 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"]
Mar 12 14:47:12.970809 master-0 kubenswrapper[37036]: I0312 14:47:12.970756 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"
Mar 12 14:47:12.984146 master-0 kubenswrapper[37036]: I0312 14:47:12.984050 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"]
Mar 12 14:47:12.999844 master-0 kubenswrapper[37036]: I0312 14:47:12.998531 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crfxb\" (UniqueName: \"kubernetes.io/projected/d754c42d-f9cc-4bae-9941-56246ef0cda2-kube-api-access-crfxb\") pod \"glance-operator-controller-manager-5964f64c48-qjq5j\" (UID: \"d754c42d-f9cc-4bae-9941-56246ef0cda2\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"
Mar 12 14:47:12.999844 master-0 kubenswrapper[37036]: I0312 14:47:12.998669 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhv5t\" (UniqueName: \"kubernetes.io/projected/e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b-kube-api-access-lhv5t\") pod \"designate-operator-controller-manager-66d56f6ff4-4h9g4\" (UID: \"e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"
Mar 12 14:47:12.999844 master-0 kubenswrapper[37036]: I0312 14:47:12.998746 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpz7\" (UniqueName: \"kubernetes.io/projected/c9d290e9-31e4-4ccd-98f6-ed3e39fb767f-kube-api-access-hbpz7\") pod \"heat-operator-controller-manager-77b6666d85-kxs4z\" (UID: \"c9d290e9-31e4-4ccd-98f6-ed3e39fb767f\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"
Mar 12 14:47:13.001298 master-0 kubenswrapper[37036]: I0312 14:47:13.001191 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"]
Mar 12 14:47:13.018578 master-0 kubenswrapper[37036]: I0312 14:47:13.017069 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"
Mar 12 14:47:13.039139 master-0 kubenswrapper[37036]: I0312 14:47:13.039088 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"]
Mar 12 14:47:13.043474 master-0 kubenswrapper[37036]: I0312 14:47:13.043428 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhv5t\" (UniqueName: \"kubernetes.io/projected/e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b-kube-api-access-lhv5t\") pod \"designate-operator-controller-manager-66d56f6ff4-4h9g4\" (UID: \"e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"
Mar 12 14:47:13.044824 master-0 kubenswrapper[37036]: I0312 14:47:13.044795 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"]
Mar 12 14:47:13.053490 master-0 kubenswrapper[37036]: I0312 14:47:13.053445 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.059104 master-0 kubenswrapper[37036]: I0312 14:47:13.058887 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 12 14:47:13.063042 master-0 kubenswrapper[37036]: I0312 14:47:13.059755 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crfxb\" (UniqueName: \"kubernetes.io/projected/d754c42d-f9cc-4bae-9941-56246ef0cda2-kube-api-access-crfxb\") pod \"glance-operator-controller-manager-5964f64c48-qjq5j\" (UID: \"d754c42d-f9cc-4bae-9941-56246ef0cda2\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"
Mar 12 14:47:13.063631 master-0 kubenswrapper[37036]: I0312 14:47:13.063593 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"
Mar 12 14:47:13.087679 master-0 kubenswrapper[37036]: I0312 14:47:13.087610 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"]
Mar 12 14:47:13.108207 master-0 kubenswrapper[37036]: I0312 14:47:13.104537 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xxl\" (UniqueName: \"kubernetes.io/projected/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-kube-api-access-64xxl\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.108207 master-0 kubenswrapper[37036]: I0312 14:47:13.104871 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mvt\" (UniqueName: \"kubernetes.io/projected/bba03fd8-df13-42d2-a75b-3f9497034686-kube-api-access-p6mvt\") pod \"horizon-operator-controller-manager-6d9d6b584d-4xbpv\" (UID: \"bba03fd8-df13-42d2-a75b-3f9497034686\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"
Mar 12 14:47:13.108207 master-0 kubenswrapper[37036]: I0312 14:47:13.105001 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpz7\" (UniqueName: \"kubernetes.io/projected/c9d290e9-31e4-4ccd-98f6-ed3e39fb767f-kube-api-access-hbpz7\") pod \"heat-operator-controller-manager-77b6666d85-kxs4z\" (UID: \"c9d290e9-31e4-4ccd-98f6-ed3e39fb767f\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"
Mar 12 14:47:13.108207 master-0 kubenswrapper[37036]: I0312 14:47:13.105093 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.118521 master-0 kubenswrapper[37036]: I0312 14:47:13.118458 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"
Mar 12 14:47:13.152393 master-0 kubenswrapper[37036]: I0312 14:47:13.152365 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpz7\" (UniqueName: \"kubernetes.io/projected/c9d290e9-31e4-4ccd-98f6-ed3e39fb767f-kube-api-access-hbpz7\") pod \"heat-operator-controller-manager-77b6666d85-kxs4z\" (UID: \"c9d290e9-31e4-4ccd-98f6-ed3e39fb767f\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"
Mar 12 14:47:13.168371 master-0 kubenswrapper[37036]: I0312 14:47:13.168315 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"
Mar 12 14:47:13.179185 master-0 kubenswrapper[37036]: I0312 14:47:13.177149 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"]
Mar 12 14:47:13.179185 master-0 kubenswrapper[37036]: I0312 14:47:13.178348 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"
Mar 12 14:47:13.182814 master-0 kubenswrapper[37036]: I0312 14:47:13.182768 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"]
Mar 12 14:47:13.207282 master-0 kubenswrapper[37036]: I0312 14:47:13.207164 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xxl\" (UniqueName: \"kubernetes.io/projected/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-kube-api-access-64xxl\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.207282 master-0 kubenswrapper[37036]: I0312 14:47:13.207260 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mvt\" (UniqueName: \"kubernetes.io/projected/bba03fd8-df13-42d2-a75b-3f9497034686-kube-api-access-p6mvt\") pod \"horizon-operator-controller-manager-6d9d6b584d-4xbpv\" (UID: \"bba03fd8-df13-42d2-a75b-3f9497034686\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"
Mar 12 14:47:13.207516 master-0 kubenswrapper[37036]: I0312 14:47:13.207406 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9l4\" (UniqueName: \"kubernetes.io/projected/9ce8293b-69ff-412e-876b-e7ba1fa3bfaa-kube-api-access-mh9l4\") pod \"ironic-operator-controller-manager-6bbb499bbc-6ts8m\" (UID: \"9ce8293b-69ff-412e-876b-e7ba1fa3bfaa\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"
Mar 12 14:47:13.207516 master-0 kubenswrapper[37036]: I0312 14:47:13.207488 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.207610 master-0 kubenswrapper[37036]: E0312 14:47:13.207585 37036 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 14:47:13.207669 master-0 kubenswrapper[37036]: E0312 14:47:13.207634 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert podName:2d83d2d1-5443-4cbd-9b12-535778ff3e9c nodeName:}" failed. No retries permitted until 2026-03-12 14:47:13.707619884 +0000 UTC m=+692.715360811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert") pod "infra-operator-controller-manager-b8c8d7cc8-qr5m5" (UID: "2d83d2d1-5443-4cbd-9b12-535778ff3e9c") : secret "infra-operator-webhook-server-cert" not found
Mar 12 14:47:13.243773 master-0 kubenswrapper[37036]: I0312 14:47:13.241972 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"]
Mar 12 14:47:13.243773 master-0 kubenswrapper[37036]: I0312 14:47:13.243117 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"
Mar 12 14:47:13.289402 master-0 kubenswrapper[37036]: I0312 14:47:13.289343 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mvt\" (UniqueName: \"kubernetes.io/projected/bba03fd8-df13-42d2-a75b-3f9497034686-kube-api-access-p6mvt\") pod \"horizon-operator-controller-manager-6d9d6b584d-4xbpv\" (UID: \"bba03fd8-df13-42d2-a75b-3f9497034686\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"
Mar 12 14:47:13.301999 master-0 kubenswrapper[37036]: I0312 14:47:13.300634 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xxl\" (UniqueName: \"kubernetes.io/projected/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-kube-api-access-64xxl\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.310421 master-0 kubenswrapper[37036]: I0312 14:47:13.309840 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9l4\" (UniqueName: \"kubernetes.io/projected/9ce8293b-69ff-412e-876b-e7ba1fa3bfaa-kube-api-access-mh9l4\") pod \"ironic-operator-controller-manager-6bbb499bbc-6ts8m\" (UID: \"9ce8293b-69ff-412e-876b-e7ba1fa3bfaa\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"
Mar 12 14:47:13.310421 master-0 kubenswrapper[37036]: I0312 14:47:13.310029 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxhr\" (UniqueName: \"kubernetes.io/projected/62bce21c-b10f-4f09-82e5-c5fe3d712f42-kube-api-access-wlxhr\") pod \"keystone-operator-controller-manager-684f77d66d-4dqbv\" (UID: \"62bce21c-b10f-4f09-82e5-c5fe3d712f42\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"
Mar 12 14:47:13.340674 master-0 kubenswrapper[37036]: I0312 14:47:13.339288 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"
Mar 12 14:47:13.353260 master-0 kubenswrapper[37036]: I0312 14:47:13.351206 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9l4\" (UniqueName: \"kubernetes.io/projected/9ce8293b-69ff-412e-876b-e7ba1fa3bfaa-kube-api-access-mh9l4\") pod \"ironic-operator-controller-manager-6bbb499bbc-6ts8m\" (UID: \"9ce8293b-69ff-412e-876b-e7ba1fa3bfaa\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"
Mar 12 14:47:13.383163 master-0 kubenswrapper[37036]: I0312 14:47:13.359510 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"]
Mar 12 14:47:13.383163 master-0 kubenswrapper[37036]: I0312 14:47:13.378416 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"]
Mar 12 14:47:13.389176 master-0 kubenswrapper[37036]: I0312 14:47:13.384088 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"
Mar 12 14:47:13.406010 master-0 kubenswrapper[37036]: I0312 14:47:13.405951 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"]
Mar 12 14:47:13.417111 master-0 kubenswrapper[37036]: I0312 14:47:13.415293 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"]
Mar 12 14:47:13.417111 master-0 kubenswrapper[37036]: I0312 14:47:13.416967 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwlj\" (UniqueName: \"kubernetes.io/projected/e8099de8-3419-4264-a356-541d4e8df2d6-kube-api-access-2lwlj\") pod \"manila-operator-controller-manager-68f45f9d9f-2mwwh\" (UID: \"e8099de8-3419-4264-a356-541d4e8df2d6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"
Mar 12 14:47:13.417371 master-0 kubenswrapper[37036]: I0312 14:47:13.417208 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"
Mar 12 14:47:13.417371 master-0 kubenswrapper[37036]: I0312 14:47:13.417279 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxhr\" (UniqueName: \"kubernetes.io/projected/62bce21c-b10f-4f09-82e5-c5fe3d712f42-kube-api-access-wlxhr\") pod \"keystone-operator-controller-manager-684f77d66d-4dqbv\" (UID: \"62bce21c-b10f-4f09-82e5-c5fe3d712f42\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"
Mar 12 14:47:13.444578 master-0 kubenswrapper[37036]: I0312 14:47:13.420889 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"
Mar 12 14:47:13.444578 master-0 kubenswrapper[37036]: I0312 14:47:13.434336 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"]
Mar 12 14:47:13.444578 master-0 kubenswrapper[37036]: I0312 14:47:13.435687 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"
Mar 12 14:47:13.444578 master-0 kubenswrapper[37036]: I0312 14:47:13.443550 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"]
Mar 12 14:47:13.481165 master-0 kubenswrapper[37036]: I0312 14:47:13.476756 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxhr\" (UniqueName: \"kubernetes.io/projected/62bce21c-b10f-4f09-82e5-c5fe3d712f42-kube-api-access-wlxhr\") pod \"keystone-operator-controller-manager-684f77d66d-4dqbv\" (UID: \"62bce21c-b10f-4f09-82e5-c5fe3d712f42\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"
Mar 12 14:47:13.481165 master-0 kubenswrapper[37036]: I0312 14:47:13.478162 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"]
Mar 12 14:47:13.528412 master-0 kubenswrapper[37036]: I0312 14:47:13.524469 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6sf2\" (UniqueName: \"kubernetes.io/projected/08088c1c-bfe4-4a59-ab44-b2b72530488c-kube-api-access-c6sf2\") pod \"mariadb-operator-controller-manager-658d4cdd5-r4gwb\" (UID: \"08088c1c-bfe4-4a59-ab44-b2b72530488c\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"
Mar 12 14:47:13.528412 master-0 kubenswrapper[37036]: I0312 14:47:13.524607 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lwlj\" (UniqueName: \"kubernetes.io/projected/e8099de8-3419-4264-a356-541d4e8df2d6-kube-api-access-2lwlj\") pod \"manila-operator-controller-manager-68f45f9d9f-2mwwh\" (UID: \"e8099de8-3419-4264-a356-541d4e8df2d6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"
Mar 12 14:47:13.528412 master-0 kubenswrapper[37036]: I0312 14:47:13.524644 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9fj\" (UniqueName: \"kubernetes.io/projected/21cb5b95-e003-4c68-af84-f62f11dbeee9-kube-api-access-zt9fj\") pod \"neutron-operator-controller-manager-776c5696bf-r757m\" (UID: \"21cb5b95-e003-4c68-af84-f62f11dbeee9\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"
Mar 12 14:47:13.547067 master-0 kubenswrapper[37036]: I0312 14:47:13.545863 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"
Mar 12 14:47:13.562381 master-0 kubenswrapper[37036]: I0312 14:47:13.562342 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lwlj\" (UniqueName: \"kubernetes.io/projected/e8099de8-3419-4264-a356-541d4e8df2d6-kube-api-access-2lwlj\") pod \"manila-operator-controller-manager-68f45f9d9f-2mwwh\" (UID: \"e8099de8-3419-4264-a356-541d4e8df2d6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"
Mar 12 14:47:13.632194 master-0 kubenswrapper[37036]: I0312 14:47:13.631292 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"]
Mar 12 14:47:13.636043 master-0 kubenswrapper[37036]: I0312 14:47:13.635865 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"
Mar 12 14:47:13.640255 master-0 kubenswrapper[37036]: I0312 14:47:13.640215 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"
Mar 12 14:47:13.642674 master-0 kubenswrapper[37036]: I0312 14:47:13.641887 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"]
Mar 12 14:47:13.645762 master-0 kubenswrapper[37036]: I0312 14:47:13.644481 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"
Mar 12 14:47:13.645762 master-0 kubenswrapper[37036]: I0312 14:47:13.644670 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9fj\" (UniqueName: \"kubernetes.io/projected/21cb5b95-e003-4c68-af84-f62f11dbeee9-kube-api-access-zt9fj\") pod \"neutron-operator-controller-manager-776c5696bf-r757m\" (UID: \"21cb5b95-e003-4c68-af84-f62f11dbeee9\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"
Mar 12 14:47:13.645762 master-0 kubenswrapper[37036]: I0312 14:47:13.644824 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6sf2\" (UniqueName: \"kubernetes.io/projected/08088c1c-bfe4-4a59-ab44-b2b72530488c-kube-api-access-c6sf2\") pod \"mariadb-operator-controller-manager-658d4cdd5-r4gwb\" (UID: \"08088c1c-bfe4-4a59-ab44-b2b72530488c\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"
Mar 12 14:47:13.680298 master-0 kubenswrapper[37036]: I0312 14:47:13.677503 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6sf2\" (UniqueName: \"kubernetes.io/projected/08088c1c-bfe4-4a59-ab44-b2b72530488c-kube-api-access-c6sf2\") pod \"mariadb-operator-controller-manager-658d4cdd5-r4gwb\" (UID: \"08088c1c-bfe4-4a59-ab44-b2b72530488c\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"
Mar 12 14:47:13.683593 master-0 kubenswrapper[37036]: I0312 14:47:13.683537 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9fj\" (UniqueName: \"kubernetes.io/projected/21cb5b95-e003-4c68-af84-f62f11dbeee9-kube-api-access-zt9fj\") pod \"neutron-operator-controller-manager-776c5696bf-r757m\" (UID: \"21cb5b95-e003-4c68-af84-f62f11dbeee9\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"
Mar 12 14:47:13.733608 master-0 kubenswrapper[37036]: I0312 14:47:13.733462 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6"]
Mar 12 14:47:13.735024 master-0 kubenswrapper[37036]: I0312 14:47:13.734996 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6"
Mar 12 14:47:13.756235 master-0 kubenswrapper[37036]: I0312 14:47:13.747145 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxvm\" (UniqueName: \"kubernetes.io/projected/4704aa20-f100-40da-9bc7-5e8f07d3fd85-kube-api-access-wsxvm\") pod \"nova-operator-controller-manager-569cc54c5-bfh8s\" (UID: \"4704aa20-f100-40da-9bc7-5e8f07d3fd85\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"
Mar 12 14:47:13.756235 master-0 kubenswrapper[37036]: I0312 14:47:13.747302 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"
Mar 12 14:47:13.756235 master-0 kubenswrapper[37036]: E0312 14:47:13.748279 37036 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 14:47:13.756235 master-0 kubenswrapper[37036]: E0312 14:47:13.752072 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert podName:2d83d2d1-5443-4cbd-9b12-535778ff3e9c nodeName:}" failed. No retries permitted until 2026-03-12 14:47:14.752048259 +0000 UTC m=+693.759789196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert") pod "infra-operator-controller-manager-b8c8d7cc8-qr5m5" (UID: "2d83d2d1-5443-4cbd-9b12-535778ff3e9c") : secret "infra-operator-webhook-server-cert" not found
Mar 12 14:47:13.782400 master-0 kubenswrapper[37036]: I0312 14:47:13.774174 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6"]
Mar 12 14:47:13.795562 master-0 kubenswrapper[37036]: I0312 14:47:13.795020 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"]
Mar 12 14:47:13.797645 master-0 kubenswrapper[37036]: I0312 14:47:13.797612 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"
Mar 12 14:47:13.800558 master-0 kubenswrapper[37036]: I0312 14:47:13.800508 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 12 14:47:13.805733 master-0 kubenswrapper[37036]: I0312 14:47:13.805674 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n"]
Mar 12 14:47:13.807730 master-0 kubenswrapper[37036]: I0312 14:47:13.807292 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n"
Mar 12 14:47:13.820798 master-0 kubenswrapper[37036]: I0312 14:47:13.820758 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"]
Mar 12 14:47:13.840256 master-0 kubenswrapper[37036]: I0312 14:47:13.836624 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n"]
Mar 12 14:47:13.860408 master-0 kubenswrapper[37036]: I0312 14:47:13.859545 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"
Mar 12 14:47:13.867350 master-0 kubenswrapper[37036]: I0312 14:47:13.866940 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxmx\" (UniqueName: \"kubernetes.io/projected/f523db96-d327-4dbc-ad4d-ba4410801482-kube-api-access-lwxmx\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"
Mar 12 14:47:13.867350 master-0 kubenswrapper[37036]: I0312 14:47:13.867049 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxvm\" (UniqueName: \"kubernetes.io/projected/4704aa20-f100-40da-9bc7-5e8f07d3fd85-kube-api-access-wsxvm\") pod \"nova-operator-controller-manager-569cc54c5-bfh8s\" (UID: \"4704aa20-f100-40da-9bc7-5e8f07d3fd85\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"
Mar 12 14:47:13.867350 master-0 kubenswrapper[37036]: I0312 14:47:13.867097 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckpj\" (UniqueName: \"kubernetes.io/projected/11507e0a-bf03-45b3-918e-604c973a2411-kube-api-access-cckpj\") pod \"octavia-operator-controller-manager-5f4f55cb5c-ds7n6\" (UID: \"11507e0a-bf03-45b3-918e-604c973a2411\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6"
Mar 12 14:47:13.867350 master-0 kubenswrapper[37036]: I0312 14:47:13.867152 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"
Mar 12 14:47:13.892764 master-0 kubenswrapper[37036]: I0312 14:47:13.892490 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxvm\" (UniqueName: \"kubernetes.io/projected/4704aa20-f100-40da-9bc7-5e8f07d3fd85-kube-api-access-wsxvm\") pod \"nova-operator-controller-manager-569cc54c5-bfh8s\" (UID: \"4704aa20-f100-40da-9bc7-5e8f07d3fd85\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"
Mar 12 14:47:13.893008 master-0 kubenswrapper[37036]: I0312 14:47:13.892823 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5"]
Mar 12 14:47:13.894338 master-0 kubenswrapper[37036]: I0312 14:47:13.894309 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5"
Mar 12 14:47:13.903510 master-0 kubenswrapper[37036]: I0312 14:47:13.903263 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5"]
Mar 12 14:47:13.924924 master-0 kubenswrapper[37036]: I0312 14:47:13.924863 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"
Mar 12 14:47:13.940832 master-0 kubenswrapper[37036]: I0312 14:47:13.940783 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"
Mar 12 14:47:13.945405 master-0 kubenswrapper[37036]: I0312 14:47:13.945372 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4"]
Mar 12 14:47:13.947059 master-0 kubenswrapper[37036]: I0312 14:47:13.947035 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4"
Mar 12 14:47:13.962199 master-0 kubenswrapper[37036]: I0312 14:47:13.962126 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv"]
Mar 12 14:47:13.966448 master-0 kubenswrapper[37036]: I0312 14:47:13.963469 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv"
Mar 12 14:47:13.966448 master-0 kubenswrapper[37036]: I0312 14:47:13.964955 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf"]
Mar 12 14:47:13.966448 master-0 kubenswrapper[37036]: I0312 14:47:13.966229 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"
Mar 12 14:47:13.967807 master-0 kubenswrapper[37036]: I0312 14:47:13.966682 37036 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:13.969112 master-0 kubenswrapper[37036]: I0312 14:47:13.969060 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwxmx\" (UniqueName: \"kubernetes.io/projected/f523db96-d327-4dbc-ad4d-ba4410801482-kube-api-access-lwxmx\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:13.976174 master-0 kubenswrapper[37036]: I0312 14:47:13.975487 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4"] Mar 12 14:47:13.976677 master-0 kubenswrapper[37036]: I0312 14:47:13.976562 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98wg\" (UniqueName: \"kubernetes.io/projected/0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4-kube-api-access-x98wg\") pod \"ovn-operator-controller-manager-bbc5b68f9-xcr9n\" (UID: \"0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:13.976677 master-0 kubenswrapper[37036]: I0312 14:47:13.976646 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckpj\" (UniqueName: \"kubernetes.io/projected/11507e0a-bf03-45b3-918e-604c973a2411-kube-api-access-cckpj\") pod \"octavia-operator-controller-manager-5f4f55cb5c-ds7n6\" (UID: \"11507e0a-bf03-45b3-918e-604c973a2411\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" Mar 12 14:47:13.976837 master-0 kubenswrapper[37036]: I0312 14:47:13.976724 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:13.977005 master-0 kubenswrapper[37036]: E0312 14:47:13.976969 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:13.977162 master-0 kubenswrapper[37036]: E0312 14:47:13.977105 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:14.477090905 +0000 UTC m=+693.484831842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:13.987327 master-0 kubenswrapper[37036]: I0312 14:47:13.987129 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv"] Mar 12 14:47:13.990564 master-0 kubenswrapper[37036]: I0312 14:47:13.990357 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf"] Mar 12 14:47:13.998242 master-0 kubenswrapper[37036]: I0312 14:47:13.998204 37036 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:47:14.000851 master-0 kubenswrapper[37036]: I0312 14:47:14.000815 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cckpj\" (UniqueName: \"kubernetes.io/projected/11507e0a-bf03-45b3-918e-604c973a2411-kube-api-access-cckpj\") pod \"octavia-operator-controller-manager-5f4f55cb5c-ds7n6\" (UID: \"11507e0a-bf03-45b3-918e-604c973a2411\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" Mar 12 14:47:14.008592 master-0 kubenswrapper[37036]: I0312 14:47:14.008545 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwxmx\" (UniqueName: \"kubernetes.io/projected/f523db96-d327-4dbc-ad4d-ba4410801482-kube-api-access-lwxmx\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:14.018790 master-0 kubenswrapper[37036]: I0312 14:47:14.018741 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t"] Mar 12 14:47:14.024317 master-0 kubenswrapper[37036]: I0312 14:47:14.024253 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:14.024546 master-0 kubenswrapper[37036]: I0312 14:47:14.024481 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t"] Mar 12 14:47:14.050638 master-0 kubenswrapper[37036]: I0312 14:47:14.049722 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995"] Mar 12 14:47:14.051286 master-0 kubenswrapper[37036]: I0312 14:47:14.051248 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.057320 master-0 kubenswrapper[37036]: I0312 14:47:14.057291 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 14:47:14.057423 master-0 kubenswrapper[37036]: I0312 14:47:14.057330 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 14:47:14.063324 master-0 kubenswrapper[37036]: I0312 14:47:14.063265 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995"] Mar 12 14:47:14.073987 master-0 kubenswrapper[37036]: I0312 14:47:14.073921 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz"] Mar 12 14:47:14.075302 master-0 kubenswrapper[37036]: I0312 14:47:14.075262 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" Mar 12 14:47:14.082987 master-0 kubenswrapper[37036]: I0312 14:47:14.080843 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz"] Mar 12 14:47:14.082987 master-0 kubenswrapper[37036]: I0312 14:47:14.081932 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98wg\" (UniqueName: \"kubernetes.io/projected/0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4-kube-api-access-x98wg\") pod \"ovn-operator-controller-manager-bbc5b68f9-xcr9n\" (UID: \"0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:14.083198 master-0 kubenswrapper[37036]: I0312 14:47:14.083076 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dhd7\" (UniqueName: \"kubernetes.io/projected/1692da52-39de-4063-8610-2b66a0b54306-kube-api-access-9dhd7\") pod \"placement-operator-controller-manager-574d45c66c-f7xf5\" (UID: \"1692da52-39de-4063-8610-2b66a0b54306\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:14.083307 master-0 kubenswrapper[37036]: I0312 14:47:14.083259 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfw5\" (UniqueName: \"kubernetes.io/projected/7982f981-768c-43fe-92b3-d6398027b9ad-kube-api-access-tgfw5\") pod \"swift-operator-controller-manager-677c674df7-pgjs4\" (UID: \"7982f981-768c-43fe-92b3-d6398027b9ad\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:14.083387 master-0 kubenswrapper[37036]: I0312 14:47:14.083369 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gth\" (UniqueName: 
\"kubernetes.io/projected/a0f55f59-240d-48c1-878f-f03802adc0ab-kube-api-access-w7gth\") pod \"test-operator-controller-manager-5c5cb9c4d7-w7gjf\" (UID: \"a0f55f59-240d-48c1-878f-f03802adc0ab\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:14.083530 master-0 kubenswrapper[37036]: I0312 14:47:14.083485 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f5s\" (UniqueName: \"kubernetes.io/projected/2b100f60-5512-47b6-b614-c245cf300c02-kube-api-access-j8f5s\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-r5hwv\" (UID: \"2b100f60-5512-47b6-b614-c245cf300c02\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:14.108113 master-0 kubenswrapper[37036]: I0312 14:47:14.104666 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98wg\" (UniqueName: \"kubernetes.io/projected/0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4-kube-api-access-x98wg\") pod \"ovn-operator-controller-manager-bbc5b68f9-xcr9n\" (UID: \"0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:14.132006 master-0 kubenswrapper[37036]: I0312 14:47:14.129305 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" Mar 12 14:47:14.138683 master-0 kubenswrapper[37036]: I0312 14:47:14.137408 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8"] Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.189576 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dhd7\" (UniqueName: \"kubernetes.io/projected/1692da52-39de-4063-8610-2b66a0b54306-kube-api-access-9dhd7\") pod \"placement-operator-controller-manager-574d45c66c-f7xf5\" (UID: \"1692da52-39de-4063-8610-2b66a0b54306\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.189685 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfw5\" (UniqueName: \"kubernetes.io/projected/7982f981-768c-43fe-92b3-d6398027b9ad-kube-api-access-tgfw5\") pod \"swift-operator-controller-manager-677c674df7-pgjs4\" (UID: \"7982f981-768c-43fe-92b3-d6398027b9ad\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.189767 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnm6\" (UniqueName: \"kubernetes.io/projected/106e54be-84f8-4b24-a4c8-8050468fef60-kube-api-access-jmnm6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ngpnz\" (UID: \"106e54be-84f8-4b24-a4c8-8050468fef60\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.189843 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj87h\" (UniqueName: 
\"kubernetes.io/projected/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-kube-api-access-sj87h\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.189963 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7mfg\" (UniqueName: \"kubernetes.io/projected/78b9b80b-bf83-470c-8712-70c7fd04e021-kube-api-access-r7mfg\") pod \"watcher-operator-controller-manager-6dd88c6f67-9vd6t\" (UID: \"78b9b80b-bf83-470c-8712-70c7fd04e021\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.190166 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.190213 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.190246 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gth\" (UniqueName: 
\"kubernetes.io/projected/a0f55f59-240d-48c1-878f-f03802adc0ab-kube-api-access-w7gth\") pod \"test-operator-controller-manager-5c5cb9c4d7-w7gjf\" (UID: \"a0f55f59-240d-48c1-878f-f03802adc0ab\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:14.191224 master-0 kubenswrapper[37036]: I0312 14:47:14.190295 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8f5s\" (UniqueName: \"kubernetes.io/projected/2b100f60-5512-47b6-b614-c245cf300c02-kube-api-access-j8f5s\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-r5hwv\" (UID: \"2b100f60-5512-47b6-b614-c245cf300c02\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:14.212205 master-0 kubenswrapper[37036]: I0312 14:47:14.212117 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dhd7\" (UniqueName: \"kubernetes.io/projected/1692da52-39de-4063-8610-2b66a0b54306-kube-api-access-9dhd7\") pod \"placement-operator-controller-manager-574d45c66c-f7xf5\" (UID: \"1692da52-39de-4063-8610-2b66a0b54306\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:14.213260 master-0 kubenswrapper[37036]: I0312 14:47:14.213206 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfw5\" (UniqueName: \"kubernetes.io/projected/7982f981-768c-43fe-92b3-d6398027b9ad-kube-api-access-tgfw5\") pod \"swift-operator-controller-manager-677c674df7-pgjs4\" (UID: \"7982f981-768c-43fe-92b3-d6398027b9ad\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:14.213504 master-0 kubenswrapper[37036]: I0312 14:47:14.213453 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f5s\" (UniqueName: \"kubernetes.io/projected/2b100f60-5512-47b6-b614-c245cf300c02-kube-api-access-j8f5s\") pod 
\"telemetry-operator-controller-manager-6cd66dbd4b-r5hwv\" (UID: \"2b100f60-5512-47b6-b614-c245cf300c02\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:14.218115 master-0 kubenswrapper[37036]: I0312 14:47:14.218003 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gth\" (UniqueName: \"kubernetes.io/projected/a0f55f59-240d-48c1-878f-f03802adc0ab-kube-api-access-w7gth\") pod \"test-operator-controller-manager-5c5cb9c4d7-w7gjf\" (UID: \"a0f55f59-240d-48c1-878f-f03802adc0ab\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:14.218115 master-0 kubenswrapper[37036]: I0312 14:47:14.218087 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:14.239346 master-0 kubenswrapper[37036]: I0312 14:47:14.236975 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:14.258293 master-0 kubenswrapper[37036]: I0312 14:47:14.255342 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:14.285883 master-0 kubenswrapper[37036]: I0312 14:47:14.285812 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:14.295733 master-0 kubenswrapper[37036]: I0312 14:47:14.295666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.295816 master-0 kubenswrapper[37036]: I0312 14:47:14.295758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.295935 master-0 kubenswrapper[37036]: I0312 14:47:14.295912 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnm6\" (UniqueName: \"kubernetes.io/projected/106e54be-84f8-4b24-a4c8-8050468fef60-kube-api-access-jmnm6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ngpnz\" (UID: \"106e54be-84f8-4b24-a4c8-8050468fef60\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" Mar 12 14:47:14.296093 master-0 kubenswrapper[37036]: I0312 14:47:14.296059 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj87h\" (UniqueName: \"kubernetes.io/projected/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-kube-api-access-sj87h\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.296135 
master-0 kubenswrapper[37036]: I0312 14:47:14.296123 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7mfg\" (UniqueName: \"kubernetes.io/projected/78b9b80b-bf83-470c-8712-70c7fd04e021-kube-api-access-r7mfg\") pod \"watcher-operator-controller-manager-6dd88c6f67-9vd6t\" (UID: \"78b9b80b-bf83-470c-8712-70c7fd04e021\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:14.296173 master-0 kubenswrapper[37036]: E0312 14:47:14.296127 37036 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 14:47:14.296232 master-0 kubenswrapper[37036]: E0312 14:47:14.296214 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:14.796192369 +0000 UTC m=+693.803933296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:14.296579 master-0 kubenswrapper[37036]: E0312 14:47:14.296520 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:14.296626 master-0 kubenswrapper[37036]: E0312 14:47:14.296588 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:14.796570696 +0000 UTC m=+693.804311623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:14.319295 master-0 kubenswrapper[37036]: I0312 14:47:14.319249 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7mfg\" (UniqueName: \"kubernetes.io/projected/78b9b80b-bf83-470c-8712-70c7fd04e021-kube-api-access-r7mfg\") pod \"watcher-operator-controller-manager-6dd88c6f67-9vd6t\" (UID: \"78b9b80b-bf83-470c-8712-70c7fd04e021\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:14.320995 master-0 kubenswrapper[37036]: I0312 14:47:14.320892 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj87h\" (UniqueName: \"kubernetes.io/projected/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-kube-api-access-sj87h\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.321063 master-0 kubenswrapper[37036]: I0312 14:47:14.320964 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnm6\" (UniqueName: \"kubernetes.io/projected/106e54be-84f8-4b24-a4c8-8050468fef60-kube-api-access-jmnm6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ngpnz\" (UID: \"106e54be-84f8-4b24-a4c8-8050468fef60\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" Mar 12 14:47:14.394272 master-0 kubenswrapper[37036]: I0312 14:47:14.394217 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" Mar 12 14:47:14.416101 master-0 kubenswrapper[37036]: I0312 14:47:14.415987 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:14.537984 master-0 kubenswrapper[37036]: I0312 14:47:14.511839 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:14.537984 master-0 kubenswrapper[37036]: E0312 14:47:14.512539 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:14.537984 master-0 kubenswrapper[37036]: E0312 14:47:14.513283 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:15.512582756 +0000 UTC m=+694.520323693 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:14.604535 master-0 kubenswrapper[37036]: I0312 14:47:14.604477 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:14.679947 master-0 kubenswrapper[37036]: I0312 14:47:14.667622 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8" event={"ID":"25b8b8bd-9d46-4f49-9258-d9369124ceb9","Type":"ContainerStarted","Data":"4d634a017f0f9b45de8076595e6dd850c1af27ee2d47b925657aea64d784dde8"} Mar 12 14:47:14.690277 master-0 kubenswrapper[37036]: I0312 14:47:14.686272 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk"] Mar 12 14:47:14.690277 master-0 kubenswrapper[37036]: W0312 14:47:14.687639 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd754c42d_f9cc_4bae_9941_56246ef0cda2.slice/crio-736e4e8dd66dae9134d308b3dfb657d10f13e1205f7c3ad74900fd617acd96f2 WatchSource:0}: Error finding container 736e4e8dd66dae9134d308b3dfb657d10f13e1205f7c3ad74900fd617acd96f2: Status 404 returned error can't find the container with id 736e4e8dd66dae9134d308b3dfb657d10f13e1205f7c3ad74900fd617acd96f2 Mar 12 14:47:14.709170 master-0 kubenswrapper[37036]: I0312 14:47:14.704669 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z"] Mar 12 14:47:14.730697 master-0 kubenswrapper[37036]: I0312 14:47:14.728640 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j"] Mar 12 14:47:14.739235 master-0 kubenswrapper[37036]: I0312 14:47:14.738585 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4"] Mar 12 14:47:14.751556 master-0 kubenswrapper[37036]: I0312 14:47:14.749965 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv"] Mar 12 14:47:14.776424 master-0 kubenswrapper[37036]: I0312 14:47:14.773865 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m"] Mar 12 14:47:14.819214 master-0 kubenswrapper[37036]: I0312 14:47:14.819132 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.819214 master-0 kubenswrapper[37036]: I0312 14:47:14.819193 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:14.819214 master-0 kubenswrapper[37036]: I0312 14:47:14.819217 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:14.819530 master-0 kubenswrapper[37036]: E0312 14:47:14.819376 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:14.819530 master-0 kubenswrapper[37036]: E0312 14:47:14.819442 37036 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: 
secret "webhook-server-cert" not found Mar 12 14:47:14.819530 master-0 kubenswrapper[37036]: E0312 14:47:14.819474 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:15.819451438 +0000 UTC m=+694.827192425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:14.821006 master-0 kubenswrapper[37036]: E0312 14:47:14.819808 37036 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:14.821006 master-0 kubenswrapper[37036]: E0312 14:47:14.819498 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:15.819489619 +0000 UTC m=+694.827230666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:14.821006 master-0 kubenswrapper[37036]: E0312 14:47:14.820041 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert podName:2d83d2d1-5443-4cbd-9b12-535778ff3e9c nodeName:}" failed. 
No retries permitted until 2026-03-12 14:47:16.820025821 +0000 UTC m=+695.827766768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert") pod "infra-operator-controller-manager-b8c8d7cc8-qr5m5" (UID: "2d83d2d1-5443-4cbd-9b12-535778ff3e9c") : secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:15.392856 master-0 kubenswrapper[37036]: I0312 14:47:15.392439 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6"] Mar 12 14:47:15.408164 master-0 kubenswrapper[37036]: I0312 14:47:15.407839 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m"] Mar 12 14:47:15.410206 master-0 kubenswrapper[37036]: W0312 14:47:15.410006 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08088c1c_bfe4_4a59_ab44_b2b72530488c.slice/crio-9bb00265be6a6a5f4cc4acf0e0c4366a58fd355e9b7dc823ed956e81211b1cca WatchSource:0}: Error finding container 9bb00265be6a6a5f4cc4acf0e0c4366a58fd355e9b7dc823ed956e81211b1cca: Status 404 returned error can't find the container with id 9bb00265be6a6a5f4cc4acf0e0c4366a58fd355e9b7dc823ed956e81211b1cca Mar 12 14:47:15.416354 master-0 kubenswrapper[37036]: W0312 14:47:15.415425 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bce21c_b10f_4f09_82e5_c5fe3d712f42.slice/crio-e1a9b4b12314d294490dedafbfd8bcc229877ff46f3bad5586d750706b970f8c WatchSource:0}: Error finding container e1a9b4b12314d294490dedafbfd8bcc229877ff46f3bad5586d750706b970f8c: Status 404 returned error can't find the container with id e1a9b4b12314d294490dedafbfd8bcc229877ff46f3bad5586d750706b970f8c Mar 12 14:47:15.421529 master-0 kubenswrapper[37036]: I0312 
14:47:15.420672 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv"] Mar 12 14:47:15.421529 master-0 kubenswrapper[37036]: W0312 14:47:15.420934 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8099de8_3419_4264_a356_541d4e8df2d6.slice/crio-e73e8b0b082809b7ece82d0aaee3c77d51b36f2ff7c693f92d52b0176ae0658a WatchSource:0}: Error finding container e73e8b0b082809b7ece82d0aaee3c77d51b36f2ff7c693f92d52b0176ae0658a: Status 404 returned error can't find the container with id e73e8b0b082809b7ece82d0aaee3c77d51b36f2ff7c693f92d52b0176ae0658a Mar 12 14:47:15.445945 master-0 kubenswrapper[37036]: W0312 14:47:15.445154 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4704aa20_f100_40da_9bc7_5e8f07d3fd85.slice/crio-caceeee216f3e0ddca39eb6f4c99b3ed06cb617abf483b90f10a7ed371b5723e WatchSource:0}: Error finding container caceeee216f3e0ddca39eb6f4c99b3ed06cb617abf483b90f10a7ed371b5723e: Status 404 returned error can't find the container with id caceeee216f3e0ddca39eb6f4c99b3ed06cb617abf483b90f10a7ed371b5723e Mar 12 14:47:15.483776 master-0 kubenswrapper[37036]: I0312 14:47:15.483666 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb"] Mar 12 14:47:15.527141 master-0 kubenswrapper[37036]: I0312 14:47:15.526944 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s"] Mar 12 14:47:15.551169 master-0 kubenswrapper[37036]: I0312 14:47:15.550886 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh"] Mar 12 14:47:15.554837 master-0 kubenswrapper[37036]: I0312 14:47:15.554780 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:15.555139 master-0 kubenswrapper[37036]: E0312 14:47:15.555114 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:15.555231 master-0 kubenswrapper[37036]: E0312 14:47:15.555214 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:17.555194187 +0000 UTC m=+696.562935124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:15.721512 master-0 kubenswrapper[37036]: I0312 14:47:15.720977 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z" event={"ID":"c9d290e9-31e4-4ccd-98f6-ed3e39fb767f","Type":"ContainerStarted","Data":"2273dc6db189a57c7fc5201e88780fe5282141d3fe01fd4c91e205c3e22fcd37"} Mar 12 14:47:15.726085 master-0 kubenswrapper[37036]: I0312 14:47:15.726028 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4" 
event={"ID":"e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b","Type":"ContainerStarted","Data":"4076008345f253503f49db8f47107a2b3d4fc711fdc159caf99cd4f35d7b6cc1"} Mar 12 14:47:15.731755 master-0 kubenswrapper[37036]: I0312 14:47:15.731692 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb" event={"ID":"08088c1c-bfe4-4a59-ab44-b2b72530488c","Type":"ContainerStarted","Data":"9bb00265be6a6a5f4cc4acf0e0c4366a58fd355e9b7dc823ed956e81211b1cca"} Mar 12 14:47:15.734836 master-0 kubenswrapper[37036]: I0312 14:47:15.734785 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv" event={"ID":"bba03fd8-df13-42d2-a75b-3f9497034686","Type":"ContainerStarted","Data":"c3053bd8ea5293579ff14760aab7463fbce0f163818301b8e6cd8ee817c9d002"} Mar 12 14:47:15.737697 master-0 kubenswrapper[37036]: I0312 14:47:15.737658 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m" event={"ID":"9ce8293b-69ff-412e-876b-e7ba1fa3bfaa","Type":"ContainerStarted","Data":"7adb1283eaf2e0f436b2a4a1378d82d4f0d0c5f55935fba0a37951414d99d511"} Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.748784 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv" event={"ID":"62bce21c-b10f-4f09-82e5-c5fe3d712f42","Type":"ContainerStarted","Data":"e1a9b4b12314d294490dedafbfd8bcc229877ff46f3bad5586d750706b970f8c"} Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.751172 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh" event={"ID":"e8099de8-3419-4264-a356-541d4e8df2d6","Type":"ContainerStarted","Data":"e73e8b0b082809b7ece82d0aaee3c77d51b36f2ff7c693f92d52b0176ae0658a"} Mar 12 14:47:15.776564 master-0 
kubenswrapper[37036]: I0312 14:47:15.756585 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf"] Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.758275 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j" event={"ID":"d754c42d-f9cc-4bae-9941-56246ef0cda2","Type":"ContainerStarted","Data":"736e4e8dd66dae9134d308b3dfb657d10f13e1205f7c3ad74900fd617acd96f2"} Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.765771 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m" event={"ID":"21cb5b95-e003-4c68-af84-f62f11dbeee9","Type":"ContainerStarted","Data":"00dc5e82f1ba28bdc838c8a795e27b1dcc1f8587fa4f54befac6d9615cd80fbf"} Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.765863 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv"] Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.772447 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" event={"ID":"11507e0a-bf03-45b3-918e-604c973a2411","Type":"ContainerStarted","Data":"e6fc00c08a1da3f758399d3826dede83309059c6b898cdf1b0640100b8ae743d"} Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.775475 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz"] Mar 12 14:47:15.776564 master-0 kubenswrapper[37036]: I0312 14:47:15.775547 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s" 
event={"ID":"4704aa20-f100-40da-9bc7-5e8f07d3fd85","Type":"ContainerStarted","Data":"caceeee216f3e0ddca39eb6f4c99b3ed06cb617abf483b90f10a7ed371b5723e"} Mar 12 14:47:15.778399 master-0 kubenswrapper[37036]: I0312 14:47:15.777650 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk" event={"ID":"4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0","Type":"ContainerStarted","Data":"f8cd5998458fc84895023aecfc8c683536c562a8930638c4a1db58c616600101"} Mar 12 14:47:15.781815 master-0 kubenswrapper[37036]: I0312 14:47:15.781768 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4"] Mar 12 14:47:15.788782 master-0 kubenswrapper[37036]: I0312 14:47:15.788724 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n"] Mar 12 14:47:15.869509 master-0 kubenswrapper[37036]: I0312 14:47:15.869419 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:15.869509 master-0 kubenswrapper[37036]: I0312 14:47:15.869512 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:15.869768 master-0 kubenswrapper[37036]: E0312 14:47:15.869666 37036 secret.go:189] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 14:47:15.869768 master-0 kubenswrapper[37036]: E0312 14:47:15.869692 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:15.869768 master-0 kubenswrapper[37036]: E0312 14:47:15.869739 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:17.869724687 +0000 UTC m=+696.877465624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:15.869768 master-0 kubenswrapper[37036]: E0312 14:47:15.869773 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:17.869764167 +0000 UTC m=+696.877505104 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:15.935473 master-0 kubenswrapper[37036]: I0312 14:47:15.935383 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5"] Mar 12 14:47:15.945656 master-0 kubenswrapper[37036]: I0312 14:47:15.945149 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t"] Mar 12 14:47:16.901796 master-0 kubenswrapper[37036]: I0312 14:47:16.901728 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:16.903051 master-0 kubenswrapper[37036]: E0312 14:47:16.901962 37036 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:16.903051 master-0 kubenswrapper[37036]: E0312 14:47:16.902030 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert podName:2d83d2d1-5443-4cbd-9b12-535778ff3e9c nodeName:}" failed. No retries permitted until 2026-03-12 14:47:20.902009006 +0000 UTC m=+699.909749943 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert") pod "infra-operator-controller-manager-b8c8d7cc8-qr5m5" (UID: "2d83d2d1-5443-4cbd-9b12-535778ff3e9c") : secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:17.027368 master-0 kubenswrapper[37036]: W0312 14:47:17.027283 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e26a6b2_5283_44c6_9bd8_ec8834c1e4f4.slice/crio-2cb2f293e7a282ff7e787507f5c7c01f638455590e20c840c3e19509c441fbc0 WatchSource:0}: Error finding container 2cb2f293e7a282ff7e787507f5c7c01f638455590e20c840c3e19509c441fbc0: Status 404 returned error can't find the container with id 2cb2f293e7a282ff7e787507f5c7c01f638455590e20c840c3e19509c441fbc0 Mar 12 14:47:17.027779 master-0 kubenswrapper[37036]: W0312 14:47:17.027720 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1692da52_39de_4063_8610_2b66a0b54306.slice/crio-186c65cd7fe71628ab0fd8d65bba6070efca395a11576cea65a70ef3e5d82f88 WatchSource:0}: Error finding container 186c65cd7fe71628ab0fd8d65bba6070efca395a11576cea65a70ef3e5d82f88: Status 404 returned error can't find the container with id 186c65cd7fe71628ab0fd8d65bba6070efca395a11576cea65a70ef3e5d82f88 Mar 12 14:47:17.033953 master-0 kubenswrapper[37036]: W0312 14:47:17.033905 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b100f60_5512_47b6_b614_c245cf300c02.slice/crio-45819de10bf80132bc2cc04d33609d6ab6c0ec73eacfaa6f37b2d20401f44f9f WatchSource:0}: Error finding container 45819de10bf80132bc2cc04d33609d6ab6c0ec73eacfaa6f37b2d20401f44f9f: Status 404 returned error can't find the container with id 45819de10bf80132bc2cc04d33609d6ab6c0ec73eacfaa6f37b2d20401f44f9f Mar 12 14:47:17.035564 master-0 kubenswrapper[37036]: 
W0312 14:47:17.035533 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78b9b80b_bf83_470c_8712_70c7fd04e021.slice/crio-50a0aca830e3d20b0eb9447f078c0c4baf26855ec6f5ffc1e582e71e4cb01547 WatchSource:0}: Error finding container 50a0aca830e3d20b0eb9447f078c0c4baf26855ec6f5ffc1e582e71e4cb01547: Status 404 returned error can't find the container with id 50a0aca830e3d20b0eb9447f078c0c4baf26855ec6f5ffc1e582e71e4cb01547 Mar 12 14:47:17.037772 master-0 kubenswrapper[37036]: W0312 14:47:17.037731 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f55f59_240d_48c1_878f_f03802adc0ab.slice/crio-9c2c0832bad42c9abcc727a61e18c72fb4777672a760432cb009f84600045cfc WatchSource:0}: Error finding container 9c2c0832bad42c9abcc727a61e18c72fb4777672a760432cb009f84600045cfc: Status 404 returned error can't find the container with id 9c2c0832bad42c9abcc727a61e18c72fb4777672a760432cb009f84600045cfc Mar 12 14:47:17.616548 master-0 kubenswrapper[37036]: I0312 14:47:17.616487 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:17.616789 master-0 kubenswrapper[37036]: E0312 14:47:17.616753 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:17.616843 master-0 kubenswrapper[37036]: E0312 14:47:17.616818 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert 
podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:21.616803744 +0000 UTC m=+700.624544681 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:17.780781 master-0 kubenswrapper[37036]: W0312 14:47:17.780716 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7982f981_768c_43fe_92b3_d6398027b9ad.slice/crio-518cfe5cd98af8ab999cbcc71e7211251682a2d780eb808bb65931bf34e0ede6 WatchSource:0}: Error finding container 518cfe5cd98af8ab999cbcc71e7211251682a2d780eb808bb65931bf34e0ede6: Status 404 returned error can't find the container with id 518cfe5cd98af8ab999cbcc71e7211251682a2d780eb808bb65931bf34e0ede6 Mar 12 14:47:17.782752 master-0 kubenswrapper[37036]: W0312 14:47:17.782681 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106e54be_84f8_4b24_a4c8_8050468fef60.slice/crio-50ea5ce1a5f2bceab74e0d3f2d63386210484b70381efa836ae28153dfb5bacf WatchSource:0}: Error finding container 50ea5ce1a5f2bceab74e0d3f2d63386210484b70381efa836ae28153dfb5bacf: Status 404 returned error can't find the container with id 50ea5ce1a5f2bceab74e0d3f2d63386210484b70381efa836ae28153dfb5bacf Mar 12 14:47:17.805084 master-0 kubenswrapper[37036]: I0312 14:47:17.805025 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" event={"ID":"2b100f60-5512-47b6-b614-c245cf300c02","Type":"ContainerStarted","Data":"45819de10bf80132bc2cc04d33609d6ab6c0ec73eacfaa6f37b2d20401f44f9f"} Mar 12 14:47:17.807837 master-0 
kubenswrapper[37036]: I0312 14:47:17.806722 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" event={"ID":"1692da52-39de-4063-8610-2b66a0b54306","Type":"ContainerStarted","Data":"186c65cd7fe71628ab0fd8d65bba6070efca395a11576cea65a70ef3e5d82f88"} Mar 12 14:47:17.808045 master-0 kubenswrapper[37036]: I0312 14:47:17.807974 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" event={"ID":"78b9b80b-bf83-470c-8712-70c7fd04e021","Type":"ContainerStarted","Data":"50a0aca830e3d20b0eb9447f078c0c4baf26855ec6f5ffc1e582e71e4cb01547"} Mar 12 14:47:17.809670 master-0 kubenswrapper[37036]: I0312 14:47:17.809642 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" event={"ID":"0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4","Type":"ContainerStarted","Data":"2cb2f293e7a282ff7e787507f5c7c01f638455590e20c840c3e19509c441fbc0"} Mar 12 14:47:17.810878 master-0 kubenswrapper[37036]: I0312 14:47:17.810851 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" event={"ID":"a0f55f59-240d-48c1-878f-f03802adc0ab","Type":"ContainerStarted","Data":"9c2c0832bad42c9abcc727a61e18c72fb4777672a760432cb009f84600045cfc"} Mar 12 14:47:17.812229 master-0 kubenswrapper[37036]: I0312 14:47:17.812074 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" event={"ID":"7982f981-768c-43fe-92b3-d6398027b9ad","Type":"ContainerStarted","Data":"518cfe5cd98af8ab999cbcc71e7211251682a2d780eb808bb65931bf34e0ede6"} Mar 12 14:47:17.813497 master-0 kubenswrapper[37036]: I0312 14:47:17.813464 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" 
event={"ID":"106e54be-84f8-4b24-a4c8-8050468fef60","Type":"ContainerStarted","Data":"50ea5ce1a5f2bceab74e0d3f2d63386210484b70381efa836ae28153dfb5bacf"} Mar 12 14:47:17.920758 master-0 kubenswrapper[37036]: I0312 14:47:17.920679 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:17.920758 master-0 kubenswrapper[37036]: I0312 14:47:17.920757 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:17.921398 master-0 kubenswrapper[37036]: E0312 14:47:17.920939 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:17.921398 master-0 kubenswrapper[37036]: E0312 14:47:17.921040 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:21.921016303 +0000 UTC m=+700.928757240 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:17.921398 master-0 kubenswrapper[37036]: E0312 14:47:17.921050 37036 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 14:47:17.921398 master-0 kubenswrapper[37036]: E0312 14:47:17.921107 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:21.921089435 +0000 UTC m=+700.928830372 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:21.001842 master-0 kubenswrapper[37036]: I0312 14:47:21.001788 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:21.002556 master-0 kubenswrapper[37036]: E0312 14:47:21.002093 37036 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:21.002556 master-0 kubenswrapper[37036]: E0312 14:47:21.002200 37036 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert podName:2d83d2d1-5443-4cbd-9b12-535778ff3e9c nodeName:}" failed. No retries permitted until 2026-03-12 14:47:29.002180729 +0000 UTC m=+708.009921656 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert") pod "infra-operator-controller-manager-b8c8d7cc8-qr5m5" (UID: "2d83d2d1-5443-4cbd-9b12-535778ff3e9c") : secret "infra-operator-webhook-server-cert" not found Mar 12 14:47:21.618076 master-0 kubenswrapper[37036]: I0312 14:47:21.617992 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:21.618307 master-0 kubenswrapper[37036]: E0312 14:47:21.618268 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:21.618368 master-0 kubenswrapper[37036]: E0312 14:47:21.618339 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:29.618321284 +0000 UTC m=+708.626062222 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:21.923184 master-0 kubenswrapper[37036]: I0312 14:47:21.922968 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:21.923184 master-0 kubenswrapper[37036]: I0312 14:47:21.923048 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:21.923427 master-0 kubenswrapper[37036]: E0312 14:47:21.923181 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:21.923427 master-0 kubenswrapper[37036]: E0312 14:47:21.923282 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:29.923260948 +0000 UTC m=+708.931001885 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:21.923427 master-0 kubenswrapper[37036]: E0312 14:47:21.923282 37036 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 14:47:21.923427 master-0 kubenswrapper[37036]: E0312 14:47:21.923350 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:29.923328659 +0000 UTC m=+708.931069646 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:29.063862 master-0 kubenswrapper[37036]: I0312 14:47:29.063796 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:29.067714 master-0 kubenswrapper[37036]: I0312 14:47:29.067489 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d83d2d1-5443-4cbd-9b12-535778ff3e9c-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-qr5m5\" (UID: \"2d83d2d1-5443-4cbd-9b12-535778ff3e9c\") " 
pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:29.118120 master-0 kubenswrapper[37036]: I0312 14:47:29.117872 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:29.677522 master-0 kubenswrapper[37036]: I0312 14:47:29.677445 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:29.677829 master-0 kubenswrapper[37036]: E0312 14:47:29.677702 37036 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:29.677829 master-0 kubenswrapper[37036]: E0312 14:47:29.677820 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert podName:f523db96-d327-4dbc-ad4d-ba4410801482 nodeName:}" failed. No retries permitted until 2026-03-12 14:47:45.677794742 +0000 UTC m=+724.685535689 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" (UID: "f523db96-d327-4dbc-ad4d-ba4410801482") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 14:47:29.983329 master-0 kubenswrapper[37036]: I0312 14:47:29.983191 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:29.983538 master-0 kubenswrapper[37036]: E0312 14:47:29.983401 37036 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 14:47:29.983538 master-0 kubenswrapper[37036]: E0312 14:47:29.983497 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:45.983478881 +0000 UTC m=+724.991219808 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "webhook-server-cert" not found Mar 12 14:47:29.983538 master-0 kubenswrapper[37036]: I0312 14:47:29.983529 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:29.984026 master-0 kubenswrapper[37036]: E0312 14:47:29.983878 37036 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 14:47:29.984086 master-0 kubenswrapper[37036]: E0312 14:47:29.984060 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs podName:78642ab2-7b7c-4ca6-bf5a-6da28d829d3e nodeName:}" failed. No retries permitted until 2026-03-12 14:47:45.984036792 +0000 UTC m=+724.991777779 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-wm995" (UID: "78642ab2-7b7c-4ca6-bf5a-6da28d829d3e") : secret "metrics-server-cert" not found Mar 12 14:47:35.993155 master-0 kubenswrapper[37036]: I0312 14:47:35.993098 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m" event={"ID":"9ce8293b-69ff-412e-876b-e7ba1fa3bfaa","Type":"ContainerStarted","Data":"3097338134da9ed59de41ed099b0d4bd745fce26be3d42da1f03b6556b5b9aa6"} Mar 12 14:47:35.993879 master-0 kubenswrapper[37036]: I0312 14:47:35.993838 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m" Mar 12 14:47:35.997069 master-0 kubenswrapper[37036]: I0312 14:47:35.996999 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8" event={"ID":"25b8b8bd-9d46-4f49-9258-d9369124ceb9","Type":"ContainerStarted","Data":"e166180067e7b3482e30252a6c16dc02e871d4d0438e31d6b069ef408cca6b5e"} Mar 12 14:47:35.997302 master-0 kubenswrapper[37036]: I0312 14:47:35.997264 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8" Mar 12 14:47:36.002171 master-0 kubenswrapper[37036]: I0312 14:47:36.002135 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv" event={"ID":"bba03fd8-df13-42d2-a75b-3f9497034686","Type":"ContainerStarted","Data":"48fec29a0942aad0102dba921224ef4863354a8bf526b13405736a41f7d9e880"} Mar 12 14:47:36.002990 master-0 kubenswrapper[37036]: I0312 14:47:36.002961 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv" Mar 12 14:47:36.022789 master-0 kubenswrapper[37036]: I0312 14:47:36.022711 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m" podStartSLOduration=6.367283034 podStartE2EDuration="24.022694188s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.776959757 +0000 UTC m=+693.784700694" lastFinishedPulling="2026-03-12 14:47:32.432370911 +0000 UTC m=+711.440111848" observedRunningTime="2026-03-12 14:47:36.016582033 +0000 UTC m=+715.024322980" watchObservedRunningTime="2026-03-12 14:47:36.022694188 +0000 UTC m=+715.030435125" Mar 12 14:47:36.058095 master-0 kubenswrapper[37036]: I0312 14:47:36.058015 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv" podStartSLOduration=7.495436281 podStartE2EDuration="24.057990332s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.713457196 +0000 UTC m=+693.721198133" lastFinishedPulling="2026-03-12 14:47:31.276011247 +0000 UTC m=+710.283752184" observedRunningTime="2026-03-12 14:47:36.047432496 +0000 UTC m=+715.055173433" watchObservedRunningTime="2026-03-12 14:47:36.057990332 +0000 UTC m=+715.065731269" Mar 12 14:47:36.090338 master-0 kubenswrapper[37036]: I0312 14:47:36.090217 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8" podStartSLOduration=3.532805527 podStartE2EDuration="24.090190582s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:13.998140116 +0000 UTC m=+693.005881053" lastFinishedPulling="2026-03-12 14:47:34.555525161 +0000 UTC m=+713.563266108" observedRunningTime="2026-03-12 14:47:36.072711474 +0000 UTC 
m=+715.080452411" watchObservedRunningTime="2026-03-12 14:47:36.090190582 +0000 UTC m=+715.097931529" Mar 12 14:47:36.115410 master-0 kubenswrapper[37036]: I0312 14:47:36.115222 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5"] Mar 12 14:47:37.023451 master-0 kubenswrapper[37036]: I0312 14:47:37.023389 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" event={"ID":"7982f981-768c-43fe-92b3-d6398027b9ad","Type":"ContainerStarted","Data":"543386b975d39c694053ec6bcbce11162ef4a54bd117ccc9d62ea038849964be"} Mar 12 14:47:37.026253 master-0 kubenswrapper[37036]: I0312 14:47:37.024519 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:37.027682 master-0 kubenswrapper[37036]: I0312 14:47:37.027643 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j" event={"ID":"d754c42d-f9cc-4bae-9941-56246ef0cda2","Type":"ContainerStarted","Data":"c6c47b75a86bb724cb412eba25c478082dd79e798a44452ee1c40534720709dc"} Mar 12 14:47:37.028563 master-0 kubenswrapper[37036]: I0312 14:47:37.028546 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j" Mar 12 14:47:37.029519 master-0 kubenswrapper[37036]: I0312 14:47:37.029496 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" event={"ID":"2d83d2d1-5443-4cbd-9b12-535778ff3e9c","Type":"ContainerStarted","Data":"d61527be045ff2c72c28853f03c3279e1d418e23b6d40ffb48c505dba6ba0792"} Mar 12 14:47:37.047650 master-0 kubenswrapper[37036]: I0312 14:47:37.047592 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s" event={"ID":"4704aa20-f100-40da-9bc7-5e8f07d3fd85","Type":"ContainerStarted","Data":"48b420323a343a8d74a5074733a3fa0243fd673d850163d51efe288ccfaa9180"} Mar 12 14:47:37.048821 master-0 kubenswrapper[37036]: I0312 14:47:37.048780 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s" Mar 12 14:47:37.068277 master-0 kubenswrapper[37036]: I0312 14:47:37.068202 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh" event={"ID":"e8099de8-3419-4264-a356-541d4e8df2d6","Type":"ContainerStarted","Data":"62f7c3c5f73b24f3e3ddd58811dd5e3e95fffc3a49b82c33f0ab94726509608c"} Mar 12 14:47:37.069438 master-0 kubenswrapper[37036]: I0312 14:47:37.069123 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh" Mar 12 14:47:37.087395 master-0 kubenswrapper[37036]: I0312 14:47:37.087332 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4" event={"ID":"e8ae7bb7-a302-4bfa-b642-b8628b9a3e5b","Type":"ContainerStarted","Data":"1741a8c007f97f53a62b3fed2f295451db773021bc52b4ae420a0442b79ec9cc"} Mar 12 14:47:37.087620 master-0 kubenswrapper[37036]: I0312 14:47:37.087429 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4" Mar 12 14:47:37.091559 master-0 kubenswrapper[37036]: I0312 14:47:37.091508 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb" event={"ID":"08088c1c-bfe4-4a59-ab44-b2b72530488c","Type":"ContainerStarted","Data":"2744e86a7f5aa2d8d6a2ef3a407cca4e5c07fde326a5f27a95d9cb458d2fc810"} Mar 12 
14:47:37.092457 master-0 kubenswrapper[37036]: I0312 14:47:37.092423 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb" Mar 12 14:47:37.094027 master-0 kubenswrapper[37036]: I0312 14:47:37.093986 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m" event={"ID":"21cb5b95-e003-4c68-af84-f62f11dbeee9","Type":"ContainerStarted","Data":"3df12a3e1fbe7346ae56300a3292b349c3554e4e9808f12d440a281e9696615d"} Mar 12 14:47:37.094726 master-0 kubenswrapper[37036]: I0312 14:47:37.094672 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m" Mar 12 14:47:37.096024 master-0 kubenswrapper[37036]: I0312 14:47:37.096000 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" event={"ID":"78b9b80b-bf83-470c-8712-70c7fd04e021","Type":"ContainerStarted","Data":"01b175c057f6ba42779923d70262f7596ffb2a3b12b391a8219adede911e0d84"} Mar 12 14:47:37.096463 master-0 kubenswrapper[37036]: I0312 14:47:37.096434 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:37.097981 master-0 kubenswrapper[37036]: I0312 14:47:37.097795 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z" event={"ID":"c9d290e9-31e4-4ccd-98f6-ed3e39fb767f","Type":"ContainerStarted","Data":"45c2d0ad53f05fe36c142fbe58ff743bf88f13d34be6b9111e333b072e43b588"} Mar 12 14:47:37.098431 master-0 kubenswrapper[37036]: I0312 14:47:37.098399 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z" Mar 12 14:47:37.100128 
master-0 kubenswrapper[37036]: I0312 14:47:37.100087 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk" event={"ID":"4fca4c75-f7f5-4cd8-a1cf-301ac2bb22d0","Type":"ContainerStarted","Data":"ad1b3e3745511c87e097a549649242e0dd0d12e3194ac9c1ef88f4e134078cd7"} Mar 12 14:47:37.100976 master-0 kubenswrapper[37036]: I0312 14:47:37.100956 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk" Mar 12 14:47:37.102559 master-0 kubenswrapper[37036]: I0312 14:47:37.102527 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" event={"ID":"0e26a6b2-5283-44c6-9bd8-ec8834c1e4f4","Type":"ContainerStarted","Data":"2c5971b952c50e05cf14e1e1875789991eeb760671782922de83e82d3564e0d4"} Mar 12 14:47:37.103116 master-0 kubenswrapper[37036]: I0312 14:47:37.103094 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:37.112816 master-0 kubenswrapper[37036]: I0312 14:47:37.112790 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" event={"ID":"106e54be-84f8-4b24-a4c8-8050468fef60","Type":"ContainerStarted","Data":"1a2b4954c59065d669924cce00c023de646a265629fe9226b290a610ea1f69a7"} Mar 12 14:47:37.118206 master-0 kubenswrapper[37036]: I0312 14:47:37.116329 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" event={"ID":"2b100f60-5512-47b6-b614-c245cf300c02","Type":"ContainerStarted","Data":"2c78516a9e8433070ef39ef9d3fe3a8d0e2bf5ba67aa1b0e184f712a87f136c6"} Mar 12 14:47:37.118206 master-0 kubenswrapper[37036]: I0312 14:47:37.116856 37036 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:37.119264 master-0 kubenswrapper[37036]: I0312 14:47:37.119228 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" event={"ID":"1692da52-39de-4063-8610-2b66a0b54306","Type":"ContainerStarted","Data":"7eafdb667dd49bc756c195ffc0684a5f0787f6382336d8ba8c3d5667b2f38466"} Mar 12 14:47:37.119439 master-0 kubenswrapper[37036]: I0312 14:47:37.119357 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:37.121836 master-0 kubenswrapper[37036]: I0312 14:47:37.121803 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" event={"ID":"11507e0a-bf03-45b3-918e-604c973a2411","Type":"ContainerStarted","Data":"3e6d221ba4fb93ee9622c69228126067b5cd6c4a2e2f9557e89f08443c814627"} Mar 12 14:47:37.122337 master-0 kubenswrapper[37036]: I0312 14:47:37.122310 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" Mar 12 14:47:37.482336 master-0 kubenswrapper[37036]: I0312 14:47:37.482246 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" podStartSLOduration=6.589124232 podStartE2EDuration="24.482232059s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.784066155 +0000 UTC m=+696.791807092" lastFinishedPulling="2026-03-12 14:47:35.677173982 +0000 UTC m=+714.684914919" observedRunningTime="2026-03-12 14:47:37.479127925 +0000 UTC m=+716.486868872" watchObservedRunningTime="2026-03-12 14:47:37.482232059 +0000 UTC m=+716.489972986" Mar 12 14:47:37.572067 master-0 
kubenswrapper[37036]: I0312 14:47:37.566179 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j" podStartSLOduration=5.723801388 podStartE2EDuration="25.5661588s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.713174549 +0000 UTC m=+693.720915486" lastFinishedPulling="2026-03-12 14:47:34.555531961 +0000 UTC m=+713.563272898" observedRunningTime="2026-03-12 14:47:37.514149584 +0000 UTC m=+716.521890541" watchObservedRunningTime="2026-03-12 14:47:37.5661588 +0000 UTC m=+716.573899737" Mar 12 14:47:37.620577 master-0 kubenswrapper[37036]: I0312 14:47:37.619099 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" podStartSLOduration=6.561607156 podStartE2EDuration="24.619075285s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.035371551 +0000 UTC m=+696.043112478" lastFinishedPulling="2026-03-12 14:47:35.09283967 +0000 UTC m=+714.100580607" observedRunningTime="2026-03-12 14:47:37.554270567 +0000 UTC m=+716.562011514" watchObservedRunningTime="2026-03-12 14:47:37.619075285 +0000 UTC m=+716.626816232" Mar 12 14:47:37.621350 master-0 kubenswrapper[37036]: I0312 14:47:37.621267 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z" podStartSLOduration=5.781167246 podStartE2EDuration="25.621252931s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.715512857 +0000 UTC m=+693.723253794" lastFinishedPulling="2026-03-12 14:47:34.555598542 +0000 UTC m=+713.563339479" observedRunningTime="2026-03-12 14:47:37.605221551 +0000 UTC m=+716.612962508" watchObservedRunningTime="2026-03-12 14:47:37.621252931 +0000 UTC m=+716.628993868" Mar 12 14:47:38.081770 master-0 
kubenswrapper[37036]: I0312 14:47:38.081674 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh" podStartSLOduration=5.913498958 podStartE2EDuration="26.081647562s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.4398225 +0000 UTC m=+694.447563437" lastFinishedPulling="2026-03-12 14:47:35.607971094 +0000 UTC m=+714.615712041" observedRunningTime="2026-03-12 14:47:38.071622496 +0000 UTC m=+717.079363433" watchObservedRunningTime="2026-03-12 14:47:38.081647562 +0000 UTC m=+717.089388499" Mar 12 14:47:38.114795 master-0 kubenswrapper[37036]: I0312 14:47:38.114719 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb" podStartSLOduration=5.869324693 podStartE2EDuration="26.114702389s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.412110853 +0000 UTC m=+694.419851790" lastFinishedPulling="2026-03-12 14:47:35.657488549 +0000 UTC m=+714.665229486" observedRunningTime="2026-03-12 14:47:38.112475754 +0000 UTC m=+717.120216691" watchObservedRunningTime="2026-03-12 14:47:38.114702389 +0000 UTC m=+717.122443316" Mar 12 14:47:38.145924 master-0 kubenswrapper[37036]: I0312 14:47:38.143514 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s" podStartSLOduration=5.922109746 podStartE2EDuration="26.14349033s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.457025014 +0000 UTC m=+694.464765951" lastFinishedPulling="2026-03-12 14:47:35.678405598 +0000 UTC m=+714.686146535" observedRunningTime="2026-03-12 14:47:38.136246932 +0000 UTC m=+717.143987879" watchObservedRunningTime="2026-03-12 14:47:38.14349033 +0000 UTC m=+717.151231257" Mar 12 14:47:38.149920 
master-0 kubenswrapper[37036]: I0312 14:47:38.147158 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv" event={"ID":"62bce21c-b10f-4f09-82e5-c5fe3d712f42","Type":"ContainerStarted","Data":"fb56813175d3a012841a7739c6ea5b62b98380b73684b22bf3554d52dab76367"} Mar 12 14:47:38.149920 master-0 kubenswrapper[37036]: I0312 14:47:38.148569 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv" Mar 12 14:47:38.163925 master-0 kubenswrapper[37036]: I0312 14:47:38.162155 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" event={"ID":"a0f55f59-240d-48c1-878f-f03802adc0ab","Type":"ContainerStarted","Data":"6ea11b2a63e6da6a87694d0e86b234a3161f8a03f968a163440104ab463be495"} Mar 12 14:47:38.163925 master-0 kubenswrapper[37036]: I0312 14:47:38.162303 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:38.192922 master-0 kubenswrapper[37036]: I0312 14:47:38.192020 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" podStartSLOduration=6.571865896 podStartE2EDuration="25.191997234s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.037350911 +0000 UTC m=+696.045091848" lastFinishedPulling="2026-03-12 14:47:35.657482249 +0000 UTC m=+714.665223186" observedRunningTime="2026-03-12 14:47:38.175341733 +0000 UTC m=+717.183082690" watchObservedRunningTime="2026-03-12 14:47:38.191997234 +0000 UTC m=+717.199738171" Mar 12 14:47:38.217927 master-0 kubenswrapper[37036]: I0312 14:47:38.215119 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" podStartSLOduration=7.17233051 podStartE2EDuration="25.215097088s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.051192845 +0000 UTC m=+696.058933782" lastFinishedPulling="2026-03-12 14:47:35.093959423 +0000 UTC m=+714.101700360" observedRunningTime="2026-03-12 14:47:38.20447557 +0000 UTC m=+717.212216517" watchObservedRunningTime="2026-03-12 14:47:38.215097088 +0000 UTC m=+717.222838025" Mar 12 14:47:38.235918 master-0 kubenswrapper[37036]: I0312 14:47:38.235016 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ngpnz" podStartSLOduration=7.342402848 podStartE2EDuration="25.234998766s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.786566856 +0000 UTC m=+696.794307793" lastFinishedPulling="2026-03-12 14:47:35.679162774 +0000 UTC m=+714.686903711" observedRunningTime="2026-03-12 14:47:38.23226389 +0000 UTC m=+717.240004827" watchObservedRunningTime="2026-03-12 14:47:38.234998766 +0000 UTC m=+717.242739703" Mar 12 14:47:38.270874 master-0 kubenswrapper[37036]: I0312 14:47:38.270763 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" podStartSLOduration=5.018492674 podStartE2EDuration="25.27073965s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.40270408 +0000 UTC m=+694.410445017" lastFinishedPulling="2026-03-12 14:47:35.654951056 +0000 UTC m=+714.662691993" observedRunningTime="2026-03-12 14:47:38.256528988 +0000 UTC m=+717.264269925" watchObservedRunningTime="2026-03-12 14:47:38.27073965 +0000 UTC m=+717.278480587" Mar 12 14:47:38.328386 master-0 kubenswrapper[37036]: I0312 14:47:38.328312 37036 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" podStartSLOduration=6.684308834 podStartE2EDuration="25.32829096s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.035093706 +0000 UTC m=+696.042834643" lastFinishedPulling="2026-03-12 14:47:35.679075832 +0000 UTC m=+714.686816769" observedRunningTime="2026-03-12 14:47:38.285285358 +0000 UTC m=+717.293026305" watchObservedRunningTime="2026-03-12 14:47:38.32829096 +0000 UTC m=+717.336031897" Mar 12 14:47:38.637886 master-0 kubenswrapper[37036]: I0312 14:47:38.637800 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4" podStartSLOduration=6.322024507 podStartE2EDuration="26.637780987s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.740583331 +0000 UTC m=+693.748324268" lastFinishedPulling="2026-03-12 14:47:35.056339811 +0000 UTC m=+714.064080748" observedRunningTime="2026-03-12 14:47:38.606462574 +0000 UTC m=+717.614203511" watchObservedRunningTime="2026-03-12 14:47:38.637780987 +0000 UTC m=+717.645521924" Mar 12 14:47:38.638769 master-0 kubenswrapper[37036]: I0312 14:47:38.638735 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m" podStartSLOduration=6.470747836 podStartE2EDuration="26.638728165s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.440011665 +0000 UTC m=+694.447752602" lastFinishedPulling="2026-03-12 14:47:35.607991994 +0000 UTC m=+714.615732931" observedRunningTime="2026-03-12 14:47:38.629240582 +0000 UTC m=+717.636981529" watchObservedRunningTime="2026-03-12 14:47:38.638728165 +0000 UTC m=+717.646469102" Mar 12 14:47:38.707922 master-0 kubenswrapper[37036]: I0312 14:47:38.707097 37036 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk" podStartSLOduration=6.367949968 podStartE2EDuration="26.707075507s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.717215642 +0000 UTC m=+693.724956579" lastFinishedPulling="2026-03-12 14:47:35.056341181 +0000 UTC m=+714.064082118" observedRunningTime="2026-03-12 14:47:38.683169667 +0000 UTC m=+717.690910594" watchObservedRunningTime="2026-03-12 14:47:38.707075507 +0000 UTC m=+717.714816444" Mar 12 14:47:38.712923 master-0 kubenswrapper[37036]: I0312 14:47:38.711123 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" podStartSLOduration=7.071414222 podStartE2EDuration="25.71110452s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:17.042260103 +0000 UTC m=+696.050001040" lastFinishedPulling="2026-03-12 14:47:35.681950391 +0000 UTC m=+714.689691338" observedRunningTime="2026-03-12 14:47:38.70769013 +0000 UTC m=+717.715431067" watchObservedRunningTime="2026-03-12 14:47:38.71110452 +0000 UTC m=+717.718845457" Mar 12 14:47:38.755919 master-0 kubenswrapper[37036]: I0312 14:47:38.755206 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv" podStartSLOduration=6.357633507 podStartE2EDuration="26.755183084s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:15.438947633 +0000 UTC m=+694.446688570" lastFinishedPulling="2026-03-12 14:47:35.83649721 +0000 UTC m=+714.844238147" observedRunningTime="2026-03-12 14:47:38.733740804 +0000 UTC m=+717.741481751" watchObservedRunningTime="2026-03-12 14:47:38.755183084 +0000 UTC m=+717.762924021" Mar 12 14:47:40.185232 master-0 kubenswrapper[37036]: I0312 14:47:40.185149 37036 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" event={"ID":"2d83d2d1-5443-4cbd-9b12-535778ff3e9c","Type":"ContainerStarted","Data":"156a1007f9b385e6c7e542e6624a4eb4c1a25713212e5b1c2fe02dedfd3e058d"} Mar 12 14:47:40.185815 master-0 kubenswrapper[37036]: I0312 14:47:40.185372 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:40.207352 master-0 kubenswrapper[37036]: I0312 14:47:40.207243 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" podStartSLOduration=24.340535566 podStartE2EDuration="28.207216951s" podCreationTimestamp="2026-03-12 14:47:12 +0000 UTC" firstStartedPulling="2026-03-12 14:47:36.150580741 +0000 UTC m=+715.158321668" lastFinishedPulling="2026-03-12 14:47:40.017262116 +0000 UTC m=+719.025003053" observedRunningTime="2026-03-12 14:47:40.200728718 +0000 UTC m=+719.208469655" watchObservedRunningTime="2026-03-12 14:47:40.207216951 +0000 UTC m=+719.214957888" Mar 12 14:47:43.067237 master-0 kubenswrapper[37036]: I0312 14:47:43.067155 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-qnrd8" Mar 12 14:47:43.125524 master-0 kubenswrapper[37036]: I0312 14:47:43.125207 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-rrwhk" Mar 12 14:47:43.188680 master-0 kubenswrapper[37036]: I0312 14:47:43.188614 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-4h9g4" Mar 12 14:47:43.341887 master-0 kubenswrapper[37036]: I0312 14:47:43.341783 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qjq5j" Mar 12 14:47:43.425039 master-0 kubenswrapper[37036]: I0312 14:47:43.424978 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-kxs4z" Mar 12 14:47:43.549679 master-0 kubenswrapper[37036]: I0312 14:47:43.549625 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4xbpv" Mar 12 14:47:43.645990 master-0 kubenswrapper[37036]: I0312 14:47:43.645851 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-6ts8m" Mar 12 14:47:43.648051 master-0 kubenswrapper[37036]: I0312 14:47:43.648022 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-4dqbv" Mar 12 14:47:43.862947 master-0 kubenswrapper[37036]: I0312 14:47:43.862870 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-2mwwh" Mar 12 14:47:43.927564 master-0 kubenswrapper[37036]: I0312 14:47:43.927396 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-r4gwb" Mar 12 14:47:43.945870 master-0 kubenswrapper[37036]: I0312 14:47:43.945796 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-r757m" Mar 12 14:47:43.970700 master-0 kubenswrapper[37036]: I0312 14:47:43.970661 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-bfh8s" Mar 12 14:47:44.132542 master-0 kubenswrapper[37036]: I0312 14:47:44.132483 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-ds7n6" Mar 12 14:47:44.221756 master-0 kubenswrapper[37036]: I0312 14:47:44.221639 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-xcr9n" Mar 12 14:47:44.243224 master-0 kubenswrapper[37036]: I0312 14:47:44.240063 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-pgjs4" Mar 12 14:47:44.258910 master-0 kubenswrapper[37036]: I0312 14:47:44.258824 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-r5hwv" Mar 12 14:47:44.287930 master-0 kubenswrapper[37036]: I0312 14:47:44.287863 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-w7gjf" Mar 12 14:47:44.412470 master-0 kubenswrapper[37036]: I0312 14:47:44.412421 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-f7xf5" Mar 12 14:47:44.608869 master-0 kubenswrapper[37036]: I0312 14:47:44.608808 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-9vd6t" Mar 12 14:47:45.743207 master-0 kubenswrapper[37036]: I0312 14:47:45.743091 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 
14:47:45.748924 master-0 kubenswrapper[37036]: I0312 14:47:45.748831 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f523db96-d327-4dbc-ad4d-ba4410801482-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl\" (UID: \"f523db96-d327-4dbc-ad4d-ba4410801482\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:45.994055 master-0 kubenswrapper[37036]: I0312 14:47:45.993876 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:46.050586 master-0 kubenswrapper[37036]: I0312 14:47:46.050506 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:46.050586 master-0 kubenswrapper[37036]: I0312 14:47:46.050571 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:46.054613 master-0 kubenswrapper[37036]: I0312 14:47:46.054556 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " 
pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:46.055305 master-0 kubenswrapper[37036]: I0312 14:47:46.055272 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78642ab2-7b7c-4ca6-bf5a-6da28d829d3e-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-wm995\" (UID: \"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:46.170761 master-0 kubenswrapper[37036]: I0312 14:47:46.170703 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:47.108023 master-0 kubenswrapper[37036]: I0312 14:47:47.107770 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl"] Mar 12 14:47:47.312178 master-0 kubenswrapper[37036]: I0312 14:47:47.309305 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995"] Mar 12 14:47:47.334179 master-0 kubenswrapper[37036]: I0312 14:47:47.334116 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" event={"ID":"f523db96-d327-4dbc-ad4d-ba4410801482","Type":"ContainerStarted","Data":"96351543278fe2e46d1609ab997cec7c70aeb6ddcc1816d0a10ce5bea2110f4e"} Mar 12 14:47:47.338702 master-0 kubenswrapper[37036]: I0312 14:47:47.338616 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" event={"ID":"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e","Type":"ContainerStarted","Data":"ad578b2cfb24cd87a45b792824886f2eebfbc3ccfed0091f9a92390a95a38dd0"} Mar 12 14:47:48.351453 master-0 kubenswrapper[37036]: 
I0312 14:47:48.351340 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" event={"ID":"78642ab2-7b7c-4ca6-bf5a-6da28d829d3e","Type":"ContainerStarted","Data":"5adf40692f55839d28f8b2179d7dd6fab950d987dd5189c88f0c5004d702233d"} Mar 12 14:47:48.352129 master-0 kubenswrapper[37036]: I0312 14:47:48.352101 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:47:48.400643 master-0 kubenswrapper[37036]: I0312 14:47:48.400538 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" podStartSLOduration=35.400512362 podStartE2EDuration="35.400512362s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:47:48.383373711 +0000 UTC m=+727.391114648" watchObservedRunningTime="2026-03-12 14:47:48.400512362 +0000 UTC m=+727.408253309" Mar 12 14:47:49.126934 master-0 kubenswrapper[37036]: I0312 14:47:49.126869 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-qr5m5" Mar 12 14:47:49.363058 master-0 kubenswrapper[37036]: I0312 14:47:49.362913 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" event={"ID":"f523db96-d327-4dbc-ad4d-ba4410801482","Type":"ContainerStarted","Data":"9e9318a9dfbc893ecb7f7f20affae37e363516ac4b5d10c0fcaa33fa8193e183"} Mar 12 14:47:49.363058 master-0 kubenswrapper[37036]: I0312 14:47:49.362988 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 
12 14:47:49.394887 master-0 kubenswrapper[37036]: I0312 14:47:49.394807 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" podStartSLOduration=34.464213832 podStartE2EDuration="36.394785393s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:47.064206129 +0000 UTC m=+726.071947076" lastFinishedPulling="2026-03-12 14:47:48.9947777 +0000 UTC m=+728.002518637" observedRunningTime="2026-03-12 14:47:49.392018536 +0000 UTC m=+728.399759483" watchObservedRunningTime="2026-03-12 14:47:49.394785393 +0000 UTC m=+728.402526330" Mar 12 14:47:56.000326 master-0 kubenswrapper[37036]: I0312 14:47:56.000224 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-c5cnl" Mar 12 14:47:56.184175 master-0 kubenswrapper[37036]: I0312 14:47:56.184102 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-wm995" Mar 12 14:48:38.492948 master-0 kubenswrapper[37036]: I0312 14:48:38.490977 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:48:38.493509 master-0 kubenswrapper[37036]: I0312 14:48:38.493025 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.509063 master-0 kubenswrapper[37036]: I0312 14:48:38.508694 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:48:38.520302 master-0 kubenswrapper[37036]: I0312 14:48:38.518643 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 14:48:38.520302 master-0 kubenswrapper[37036]: I0312 14:48:38.518858 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 14:48:38.520302 master-0 kubenswrapper[37036]: I0312 14:48:38.518982 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 14:48:38.583038 master-0 kubenswrapper[37036]: I0312 14:48:38.582234 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlxf\" (UniqueName: \"kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.583038 master-0 kubenswrapper[37036]: I0312 14:48:38.582389 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.593617 master-0 kubenswrapper[37036]: I0312 14:48:38.593551 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:48:38.595469 master-0 kubenswrapper[37036]: I0312 14:48:38.595434 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.598395 master-0 kubenswrapper[37036]: I0312 14:48:38.597852 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 14:48:38.641631 master-0 kubenswrapper[37036]: I0312 14:48:38.641502 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:48:38.683724 master-0 kubenswrapper[37036]: I0312 14:48:38.683654 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.683724 master-0 kubenswrapper[37036]: I0312 14:48:38.683715 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lcj\" (UniqueName: \"kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.684018 master-0 kubenswrapper[37036]: I0312 14:48:38.683752 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjlxf\" (UniqueName: \"kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.684018 master-0 kubenswrapper[37036]: I0312 14:48:38.683862 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: 
\"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.684018 master-0 kubenswrapper[37036]: I0312 14:48:38.683915 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.685063 master-0 kubenswrapper[37036]: I0312 14:48:38.685013 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.699347 master-0 kubenswrapper[37036]: I0312 14:48:38.699286 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjlxf\" (UniqueName: \"kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf\") pod \"dnsmasq-dns-685c76cf85-l8v4f\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.785776 master-0 kubenswrapper[37036]: I0312 14:48:38.785495 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.785776 master-0 kubenswrapper[37036]: I0312 14:48:38.785631 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: 
\"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.785776 master-0 kubenswrapper[37036]: I0312 14:48:38.785655 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lcj\" (UniqueName: \"kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.787086 master-0 kubenswrapper[37036]: I0312 14:48:38.786993 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.787223 master-0 kubenswrapper[37036]: I0312 14:48:38.787118 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.804639 master-0 kubenswrapper[37036]: I0312 14:48:38.804594 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lcj\" (UniqueName: \"kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj\") pod \"dnsmasq-dns-8476fd89bc-2nlww\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:38.829054 master-0 kubenswrapper[37036]: I0312 14:48:38.828999 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:48:38.928018 master-0 kubenswrapper[37036]: I0312 14:48:38.927452 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:48:39.357389 master-0 kubenswrapper[37036]: I0312 14:48:39.350515 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:48:39.444490 master-0 kubenswrapper[37036]: I0312 14:48:39.444005 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:48:39.868423 master-0 kubenswrapper[37036]: I0312 14:48:39.868343 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" event={"ID":"cd13635d-f1f1-4c88-ab76-560157eb3878","Type":"ContainerStarted","Data":"b885e5ff268839b8b10b993a8462a2ae523baaf496a7580a15c00820f902c3a1"} Mar 12 14:48:39.871613 master-0 kubenswrapper[37036]: I0312 14:48:39.870714 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" event={"ID":"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83","Type":"ContainerStarted","Data":"42958e97293f4dc711a4ffd619c8e33515d6b2f488c40cb8fa433340e2db32c4"} Mar 12 14:48:42.809664 master-0 kubenswrapper[37036]: I0312 14:48:42.808507 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:48:42.886485 master-0 kubenswrapper[37036]: I0312 14:48:42.886379 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:48:42.897704 master-0 kubenswrapper[37036]: I0312 14:48:42.896041 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:42.920988 master-0 kubenswrapper[37036]: I0312 14:48:42.919827 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:48:43.011005 master-0 kubenswrapper[37036]: I0312 14:48:43.010953 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288hc\" (UniqueName: \"kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.011222 master-0 kubenswrapper[37036]: I0312 14:48:43.011063 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.011222 master-0 kubenswrapper[37036]: I0312 14:48:43.011191 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.112978 master-0 kubenswrapper[37036]: I0312 14:48:43.112586 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288hc\" (UniqueName: \"kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.112978 master-0 kubenswrapper[37036]: I0312 14:48:43.112659 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.112978 master-0 kubenswrapper[37036]: I0312 14:48:43.112731 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.113825 master-0 kubenswrapper[37036]: I0312 14:48:43.113737 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.121508 master-0 kubenswrapper[37036]: I0312 14:48:43.116658 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.168971 master-0 kubenswrapper[37036]: I0312 14:48:43.163931 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288hc\" (UniqueName: \"kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc\") pod \"dnsmasq-dns-586dbdbb8c-5gmb4\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.227559 master-0 kubenswrapper[37036]: I0312 14:48:43.227508 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:48:43.317966 master-0 kubenswrapper[37036]: I0312 14:48:43.315563 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:48:43.422365 master-0 kubenswrapper[37036]: I0312 14:48:43.422300 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:48:43.434498 master-0 kubenswrapper[37036]: I0312 14:48:43.434448 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:48:43.434722 master-0 kubenswrapper[37036]: I0312 14:48:43.434558 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.533972 master-0 kubenswrapper[37036]: I0312 14:48:43.533782 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.534197 master-0 kubenswrapper[37036]: I0312 14:48:43.534080 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.534338 master-0 kubenswrapper[37036]: I0312 14:48:43.534217 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwswk\" (UniqueName: \"kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: 
\"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.637442 master-0 kubenswrapper[37036]: I0312 14:48:43.637371 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.644395 master-0 kubenswrapper[37036]: I0312 14:48:43.638248 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.644709 master-0 kubenswrapper[37036]: I0312 14:48:43.638421 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwswk\" (UniqueName: \"kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.652794 master-0 kubenswrapper[37036]: I0312 14:48:43.652653 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.660434 master-0 kubenswrapper[37036]: I0312 14:48:43.658705 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " 
pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.681350 master-0 kubenswrapper[37036]: I0312 14:48:43.664773 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwswk\" (UniqueName: \"kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk\") pod \"dnsmasq-dns-6ff8fd9d5c-wn7k8\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:43.796494 master-0 kubenswrapper[37036]: I0312 14:48:43.796008 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:48:45.078281 master-0 kubenswrapper[37036]: I0312 14:48:45.077330 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 14:48:45.082216 master-0 kubenswrapper[37036]: I0312 14:48:45.079014 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 14:48:45.098364 master-0 kubenswrapper[37036]: I0312 14:48:45.098315 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 14:48:45.098569 master-0 kubenswrapper[37036]: I0312 14:48:45.098326 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 14:48:45.107021 master-0 kubenswrapper[37036]: I0312 14:48:45.106847 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 14:48:45.146807 master-0 kubenswrapper[37036]: I0312 14:48:45.140139 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 14:48:45.206006 master-0 kubenswrapper[37036]: I0312 14:48:45.204076 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7wt\" (UniqueName: \"kubernetes.io/projected/b6a9660b-6127-48b0-82e7-cf5e38a66429-kube-api-access-9m7wt\") pod 
\"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.206196 master-0 kubenswrapper[37036]: I0312 14:48:45.206098 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-kolla-config\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.206196 master-0 kubenswrapper[37036]: I0312 14:48:45.206153 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-config-data\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.206267 master-0 kubenswrapper[37036]: I0312 14:48:45.206213 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.206401 master-0 kubenswrapper[37036]: I0312 14:48:45.206379 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.309199 master-0 kubenswrapper[37036]: I0312 14:48:45.309116 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " 
pod="openstack/memcached-0" Mar 12 14:48:45.309199 master-0 kubenswrapper[37036]: I0312 14:48:45.309188 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7wt\" (UniqueName: \"kubernetes.io/projected/b6a9660b-6127-48b0-82e7-cf5e38a66429-kube-api-access-9m7wt\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.309423 master-0 kubenswrapper[37036]: I0312 14:48:45.309221 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-kolla-config\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.309521 master-0 kubenswrapper[37036]: I0312 14:48:45.309490 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-config-data\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.309749 master-0 kubenswrapper[37036]: I0312 14:48:45.309712 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.316723 master-0 kubenswrapper[37036]: I0312 14:48:45.316629 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-config-data\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.325224 master-0 kubenswrapper[37036]: I0312 14:48:45.320097 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6a9660b-6127-48b0-82e7-cf5e38a66429-kolla-config\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.325224 master-0 kubenswrapper[37036]: I0312 14:48:45.320888 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.338040 master-0 kubenswrapper[37036]: I0312 14:48:45.332324 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a9660b-6127-48b0-82e7-cf5e38a66429-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.342772 master-0 kubenswrapper[37036]: I0312 14:48:45.342719 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7wt\" (UniqueName: \"kubernetes.io/projected/b6a9660b-6127-48b0-82e7-cf5e38a66429-kube-api-access-9m7wt\") pod \"memcached-0\" (UID: \"b6a9660b-6127-48b0-82e7-cf5e38a66429\") " pod="openstack/memcached-0" Mar 12 14:48:45.448371 master-0 kubenswrapper[37036]: I0312 14:48:45.447046 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 14:48:47.076513 master-0 kubenswrapper[37036]: I0312 14:48:47.076445 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 14:48:47.078656 master-0 kubenswrapper[37036]: I0312 14:48:47.077999 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.089390 master-0 kubenswrapper[37036]: I0312 14:48:47.089296 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 14:48:47.089618 master-0 kubenswrapper[37036]: I0312 14:48:47.089595 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 14:48:47.089751 master-0 kubenswrapper[37036]: I0312 14:48:47.089721 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 14:48:47.089881 master-0 kubenswrapper[37036]: I0312 14:48:47.089851 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 14:48:47.090066 master-0 kubenswrapper[37036]: I0312 14:48:47.090019 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 14:48:47.090188 master-0 kubenswrapper[37036]: I0312 14:48:47.090140 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 14:48:47.126388 master-0 kubenswrapper[37036]: I0312 14:48:47.126268 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 14:48:47.160108 master-0 kubenswrapper[37036]: I0312 14:48:47.160041 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160108 master-0 kubenswrapper[37036]: I0312 14:48:47.160111 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160138 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160177 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1972b05-301f-4c40-a041-6596e9cd118a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e110f0c-a6e7-4e18-b641-c7cd95c4b749\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160241 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160269 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160285 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8xr\" 
(UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-kube-api-access-xv8xr\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160319 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160333 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160379 master-0 kubenswrapper[37036]: I0312 14:48:47.160358 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.160620 master-0 kubenswrapper[37036]: I0312 14:48:47.160388 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274159 master-0 kubenswrapper[37036]: I0312 14:48:47.274104 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274402 master-0 kubenswrapper[37036]: I0312 14:48:47.274175 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8xr\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-kube-api-access-xv8xr\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274402 master-0 kubenswrapper[37036]: I0312 14:48:47.274216 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274402 master-0 kubenswrapper[37036]: I0312 14:48:47.274300 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274402 master-0 kubenswrapper[37036]: I0312 14:48:47.274370 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274539 master-0 kubenswrapper[37036]: I0312 14:48:47.274423 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274760 master-0 kubenswrapper[37036]: I0312 14:48:47.274721 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.274984 master-0 kubenswrapper[37036]: I0312 14:48:47.274948 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.275045 master-0 kubenswrapper[37036]: I0312 14:48:47.275002 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.275080 master-0 kubenswrapper[37036]: I0312 14:48:47.275048 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1972b05-301f-4c40-a041-6596e9cd118a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e110f0c-a6e7-4e18-b641-c7cd95c4b749\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.275194 master-0 kubenswrapper[37036]: I0312 14:48:47.275155 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.277381 master-0 kubenswrapper[37036]: I0312 14:48:47.277353 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.278086 master-0 kubenswrapper[37036]: I0312 14:48:47.278048 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-config-data\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.279269 master-0 kubenswrapper[37036]: I0312 14:48:47.279223 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.285934 master-0 kubenswrapper[37036]: I0312 14:48:47.280737 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.285934 master-0 kubenswrapper[37036]: I0312 14:48:47.284280 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " 
pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.285934 master-0 kubenswrapper[37036]: I0312 14:48:47.284540 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.293097 master-0 kubenswrapper[37036]: I0312 14:48:47.291595 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.293097 master-0 kubenswrapper[37036]: I0312 14:48:47.291615 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.293097 master-0 kubenswrapper[37036]: I0312 14:48:47.291695 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.293097 master-0 kubenswrapper[37036]: I0312 14:48:47.292598 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 14:48:47.293097 master-0 kubenswrapper[37036]: I0312 14:48:47.292643 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1972b05-301f-4c40-a041-6596e9cd118a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e110f0c-a6e7-4e18-b641-c7cd95c4b749\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b0b51fa3a81d03444bf9591da77910f30e1693e362b91dd290e3a8eb82d9ea78/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 14:48:47.300723 master-0 kubenswrapper[37036]: I0312 14:48:47.300649 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8xr\" (UniqueName: \"kubernetes.io/projected/78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e-kube-api-access-xv8xr\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:48.539956 master-0 kubenswrapper[37036]: I0312 14:48:48.530954 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 12 14:48:48.585925 master-0 kubenswrapper[37036]: I0312 14:48:48.582048 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 14:48:48.597921 master-0 kubenswrapper[37036]: I0312 14:48:48.594562 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 12 14:48:48.597921 master-0 kubenswrapper[37036]: I0312 14:48:48.594767 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 12 14:48:48.597921 master-0 kubenswrapper[37036]: I0312 14:48:48.594985 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624608 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ca31eeb-5673-4291-881e-36fa35ff50e6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1414a338-7b13-4bf3-8b46-75be8cba8e25\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624696 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-operator-scripts\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624715 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-kolla-config\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624856 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-default\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624911 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624952 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-generated\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.624982 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tlf\" (UniqueName: \"kubernetes.io/projected/114161cb-b5bb-41d9-b085-63a181ec3480-kube-api-access-74tlf\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.628961 master-0 kubenswrapper[37036]: I0312 14:48:48.625001 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.658406 master-0 kubenswrapper[37036]: I0312 14:48:48.658089 37036 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.732951 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-default\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733037 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733121 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-generated\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733167 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tlf\" (UniqueName: \"kubernetes.io/projected/114161cb-b5bb-41d9-b085-63a181ec3480-kube-api-access-74tlf\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733187 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " 
pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733237 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ca31eeb-5673-4291-881e-36fa35ff50e6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1414a338-7b13-4bf3-8b46-75be8cba8e25\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733276 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-operator-scripts\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.733294 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-kolla-config\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.736942 master-0 kubenswrapper[37036]: I0312 14:48:48.735576 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-kolla-config\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.740208 master-0 kubenswrapper[37036]: I0312 14:48:48.737450 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-generated\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.740208 master-0 
kubenswrapper[37036]: I0312 14:48:48.738241 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-operator-scripts\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.741586 master-0 kubenswrapper[37036]: I0312 14:48:48.741540 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/114161cb-b5bb-41d9-b085-63a181ec3480-config-data-default\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.746542 master-0 kubenswrapper[37036]: I0312 14:48:48.745972 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:48:48.746542 master-0 kubenswrapper[37036]: I0312 14:48:48.746268 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ca31eeb-5673-4291-881e-36fa35ff50e6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1414a338-7b13-4bf3-8b46-75be8cba8e25\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b44ba4040fbf532644a947d74e177392f6093cca5dd1c9dd55a50999038555ed/globalmount\"" pod="openstack/openstack-galera-0" Mar 12 14:48:48.756651 master-0 kubenswrapper[37036]: I0312 14:48:48.756578 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.771431 master-0 kubenswrapper[37036]: I0312 14:48:48.771387 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/114161cb-b5bb-41d9-b085-63a181ec3480-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:48.790416 master-0 kubenswrapper[37036]: I0312 14:48:48.790361 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tlf\" (UniqueName: \"kubernetes.io/projected/114161cb-b5bb-41d9-b085-63a181ec3480-kube-api-access-74tlf\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:49.062216 master-0 kubenswrapper[37036]: I0312 14:48:49.062127 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1972b05-301f-4c40-a041-6596e9cd118a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5e110f0c-a6e7-4e18-b641-c7cd95c4b749\") pod \"rabbitmq-server-0\" (UID: \"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e\") " pod="openstack/rabbitmq-server-0" Mar 12 14:48:49.285188 master-0 kubenswrapper[37036]: I0312 14:48:49.284984 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 14:48:49.591485 master-0 kubenswrapper[37036]: I0312 14:48:49.591415 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 14:48:49.598308 master-0 kubenswrapper[37036]: I0312 14:48:49.598239 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.608348 master-0 kubenswrapper[37036]: I0312 14:48:49.608275 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 14:48:49.608637 master-0 kubenswrapper[37036]: I0312 14:48:49.608537 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 14:48:49.608744 master-0 kubenswrapper[37036]: I0312 14:48:49.608697 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 14:48:49.608825 master-0 kubenswrapper[37036]: I0312 14:48:49.608805 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 14:48:49.608952 master-0 kubenswrapper[37036]: I0312 14:48:49.608935 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 14:48:49.609110 master-0 kubenswrapper[37036]: I0312 14:48:49.609062 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 14:48:49.657017 master-0 kubenswrapper[37036]: I0312 14:48:49.656952 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.758818 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.758959 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f063fb36-4428-461a-8b29-3750c3f8217f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759024 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759173 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f063fb36-4428-461a-8b29-3750c3f8217f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759221 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759288 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61c2ff73-4257-4af9-9dde-b43c9315431d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^df84f46e-9a17-4b48-b812-fe3f92af0f79\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759383 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759444 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6k9\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-kube-api-access-gs6k9\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759518 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759597 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.762018 master-0 kubenswrapper[37036]: I0312 14:48:49.759678 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861129 
master-0 kubenswrapper[37036]: I0312 14:48:49.861005 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861129 master-0 kubenswrapper[37036]: I0312 14:48:49.861099 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f063fb36-4428-461a-8b29-3750c3f8217f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861147 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861214 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f063fb36-4428-461a-8b29-3750c3f8217f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861256 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 
14:48:49.861285 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61c2ff73-4257-4af9-9dde-b43c9315431d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^df84f46e-9a17-4b48-b812-fe3f92af0f79\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861339 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861369 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs6k9\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-kube-api-access-gs6k9\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861410 master-0 kubenswrapper[37036]: I0312 14:48:49.861396 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861712 master-0 kubenswrapper[37036]: I0312 14:48:49.861425 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.861712 master-0 kubenswrapper[37036]: 
I0312 14:48:49.861482 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.863280 master-0 kubenswrapper[37036]: I0312 14:48:49.863208 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.863411 master-0 kubenswrapper[37036]: I0312 14:48:49.863374 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.864573 master-0 kubenswrapper[37036]: I0312 14:48:49.863987 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.865241 master-0 kubenswrapper[37036]: I0312 14:48:49.865013 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f063fb36-4428-461a-8b29-3750c3f8217f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.865413 master-0 kubenswrapper[37036]: I0312 14:48:49.865387 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.865770 master-0 kubenswrapper[37036]: I0312 14:48:49.865728 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:48:49.865905 master-0 kubenswrapper[37036]: I0312 14:48:49.865811 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61c2ff73-4257-4af9-9dde-b43c9315431d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^df84f46e-9a17-4b48-b812-fe3f92af0f79\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/808f2d6b1d0ab3b77be9c3992bebc6710be054daa6b9fb73872d6368be43c693/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.867336 master-0 kubenswrapper[37036]: I0312 14:48:49.867143 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f063fb36-4428-461a-8b29-3750c3f8217f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.875322 master-0 kubenswrapper[37036]: I0312 14:48:49.875261 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f063fb36-4428-461a-8b29-3750c3f8217f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.877176 master-0 kubenswrapper[37036]: I0312 14:48:49.877126 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.883702 master-0 kubenswrapper[37036]: I0312 14:48:49.883651 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:49.887888 master-0 kubenswrapper[37036]: I0312 14:48:49.887818 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs6k9\" (UniqueName: \"kubernetes.io/projected/f063fb36-4428-461a-8b29-3750c3f8217f-kube-api-access-gs6k9\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:50.240512 master-0 kubenswrapper[37036]: I0312 14:48:50.240465 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ca31eeb-5673-4291-881e-36fa35ff50e6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1414a338-7b13-4bf3-8b46-75be8cba8e25\") pod \"openstack-galera-0\" (UID: \"114161cb-b5bb-41d9-b085-63a181ec3480\") " pod="openstack/openstack-galera-0" Mar 12 14:48:50.333950 master-0 kubenswrapper[37036]: I0312 14:48:50.332745 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 14:48:50.335003 master-0 kubenswrapper[37036]: I0312 14:48:50.334740 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.337311 master-0 kubenswrapper[37036]: I0312 14:48:50.336868 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 12 14:48:50.337311 master-0 kubenswrapper[37036]: I0312 14:48:50.337079 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 12 14:48:50.337311 master-0 kubenswrapper[37036]: I0312 14:48:50.337222 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 12 14:48:50.371205 master-0 kubenswrapper[37036]: I0312 14:48:50.370479 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 14:48:50.443743 master-0 kubenswrapper[37036]: I0312 14:48:50.443684 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 14:48:50.475348 master-0 kubenswrapper[37036]: I0312 14:48:50.475293 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.475556 master-0 kubenswrapper[37036]: I0312 14:48:50.475398 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.475556 master-0 kubenswrapper[37036]: I0312 14:48:50.475443 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e303709a-0166-4153-9e20-0351599d1a9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.476313 master-0 kubenswrapper[37036]: I0312 14:48:50.476286 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f656dc5d-da7b-41b3-8258-fce1aa12e2bf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^73c2cdf1-1d88-43d1-b8ea-3485125e425e\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.476381 master-0 kubenswrapper[37036]: I0312 14:48:50.476339 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbm7j\" (UniqueName: \"kubernetes.io/projected/e303709a-0166-4153-9e20-0351599d1a9c-kube-api-access-vbm7j\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.476381 master-0 kubenswrapper[37036]: I0312 14:48:50.476370 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.476444 master-0 kubenswrapper[37036]: I0312 14:48:50.476403 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.476444 master-0 
kubenswrapper[37036]: I0312 14:48:50.476428 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.512012 master-0 kubenswrapper[37036]: I0312 14:48:50.511591 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4mq52"] Mar 12 14:48:50.515930 master-0 kubenswrapper[37036]: I0312 14:48:50.513409 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.517800 master-0 kubenswrapper[37036]: I0312 14:48:50.516673 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 12 14:48:50.517800 master-0 kubenswrapper[37036]: I0312 14:48:50.517128 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 12 14:48:50.522962 master-0 kubenswrapper[37036]: I0312 14:48:50.522917 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4mq52"] Mar 12 14:48:50.580719 master-0 kubenswrapper[37036]: I0312 14:48:50.580644 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581034 master-0 kubenswrapper[37036]: I0312 14:48:50.580740 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e303709a-0166-4153-9e20-0351599d1a9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581034 master-0 kubenswrapper[37036]: I0312 14:48:50.580821 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f656dc5d-da7b-41b3-8258-fce1aa12e2bf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^73c2cdf1-1d88-43d1-b8ea-3485125e425e\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581034 master-0 kubenswrapper[37036]: I0312 14:48:50.580878 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbm7j\" (UniqueName: \"kubernetes.io/projected/e303709a-0166-4153-9e20-0351599d1a9c-kube-api-access-vbm7j\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581216 master-0 kubenswrapper[37036]: I0312 14:48:50.581175 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e303709a-0166-4153-9e20-0351599d1a9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581274 master-0 kubenswrapper[37036]: I0312 14:48:50.581259 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581331 master-0 kubenswrapper[37036]: I0312 14:48:50.581320 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581370 master-0 kubenswrapper[37036]: I0312 14:48:50.581357 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.581538 master-0 kubenswrapper[37036]: I0312 14:48:50.581513 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.591923 master-0 kubenswrapper[37036]: I0312 14:48:50.581513 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6l42m"] Mar 12 14:48:50.599956 master-0 kubenswrapper[37036]: I0312 14:48:50.596177 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.599956 master-0 kubenswrapper[37036]: I0312 14:48:50.593455 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.599956 master-0 kubenswrapper[37036]: I0312 14:48:50.596531 37036 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e303709a-0166-4153-9e20-0351599d1a9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.599956 master-0 kubenswrapper[37036]: I0312 14:48:50.597158 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:48:50.599956 master-0 kubenswrapper[37036]: I0312 14:48:50.597204 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f656dc5d-da7b-41b3-8258-fce1aa12e2bf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^73c2cdf1-1d88-43d1-b8ea-3485125e425e\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a5a4e79e1372754c37bace72e0baa6ea950313eacf414d73ba907c7cd8883a0f/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.627100 master-0 kubenswrapper[37036]: I0312 14:48:50.622836 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.627100 master-0 kubenswrapper[37036]: I0312 14:48:50.625061 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbm7j\" (UniqueName: \"kubernetes.io/projected/e303709a-0166-4153-9e20-0351599d1a9c-kube-api-access-vbm7j\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.632464 master-0 kubenswrapper[37036]: I0312 14:48:50.629684 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e303709a-0166-4153-9e20-0351599d1a9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:50.632464 master-0 kubenswrapper[37036]: I0312 14:48:50.630922 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.640776 master-0 kubenswrapper[37036]: I0312 14:48:50.640685 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6l42m"] Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696518 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a24e803b-32b7-4b4b-bb59-f58b9a506626-scripts\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696602 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-log-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696686 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8kj\" (UniqueName: \"kubernetes.io/projected/a24e803b-32b7-4b4b-bb59-f58b9a506626-kube-api-access-6k8kj\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696744 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-ovn-controller-tls-certs\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696831 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696866 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.697212 master-0 kubenswrapper[37036]: I0312 14:48:50.696962 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-combined-ca-bundle\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799531 master-0 kubenswrapper[37036]: I0312 14:48:50.799388 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f28fa9-b72d-471a-b089-9a79f5669fae-scripts\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.799531 master-0 kubenswrapper[37036]: I0312 14:48:50.799465 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/a24e803b-32b7-4b4b-bb59-f58b9a506626-scripts\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799531 master-0 kubenswrapper[37036]: I0312 14:48:50.799504 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-log-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799559 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8kj\" (UniqueName: \"kubernetes.io/projected/a24e803b-32b7-4b4b-bb59-f58b9a506626-kube-api-access-6k8kj\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799588 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-lib\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799613 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-ovn-controller-tls-certs\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799634 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk86h\" 
(UniqueName: \"kubernetes.io/projected/30f28fa9-b72d-471a-b089-9a79f5669fae-kube-api-access-tk86h\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799677 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-etc-ovs\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799716 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799759 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-log\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.799807 master-0 kubenswrapper[37036]: I0312 14:48:50.799783 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.800174 master-0 kubenswrapper[37036]: I0312 14:48:50.799817 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-combined-ca-bundle\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.800174 master-0 kubenswrapper[37036]: I0312 14:48:50.799856 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-run\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.800788 master-0 kubenswrapper[37036]: I0312 14:48:50.800755 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-log-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.801157 master-0 kubenswrapper[37036]: I0312 14:48:50.801089 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.801265 master-0 kubenswrapper[37036]: I0312 14:48:50.801235 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a24e803b-32b7-4b4b-bb59-f58b9a506626-var-run-ovn\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.803120 master-0 kubenswrapper[37036]: I0312 14:48:50.803078 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a24e803b-32b7-4b4b-bb59-f58b9a506626-scripts\") pod \"ovn-controller-4mq52\" (UID: 
\"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.805783 master-0 kubenswrapper[37036]: I0312 14:48:50.805613 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-ovn-controller-tls-certs\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.818831 master-0 kubenswrapper[37036]: I0312 14:48:50.818778 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24e803b-32b7-4b4b-bb59-f58b9a506626-combined-ca-bundle\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.822671 master-0 kubenswrapper[37036]: I0312 14:48:50.821658 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8kj\" (UniqueName: \"kubernetes.io/projected/a24e803b-32b7-4b4b-bb59-f58b9a506626-kube-api-access-6k8kj\") pod \"ovn-controller-4mq52\" (UID: \"a24e803b-32b7-4b4b-bb59-f58b9a506626\") " pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.859073 master-0 kubenswrapper[37036]: I0312 14:48:50.859023 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4mq52" Mar 12 14:48:50.901538 master-0 kubenswrapper[37036]: I0312 14:48:50.901494 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk86h\" (UniqueName: \"kubernetes.io/projected/30f28fa9-b72d-471a-b089-9a79f5669fae-kube-api-access-tk86h\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.901673 master-0 kubenswrapper[37036]: I0312 14:48:50.901567 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-etc-ovs\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.902223 master-0 kubenswrapper[37036]: I0312 14:48:50.901875 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-etc-ovs\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.902223 master-0 kubenswrapper[37036]: I0312 14:48:50.901950 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-log\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.902223 master-0 kubenswrapper[37036]: I0312 14:48:50.902071 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-log\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 
14:48:50.902223 master-0 kubenswrapper[37036]: I0312 14:48:50.902140 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-run\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.902393 master-0 kubenswrapper[37036]: I0312 14:48:50.902189 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-run\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.902393 master-0 kubenswrapper[37036]: I0312 14:48:50.902305 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f28fa9-b72d-471a-b089-9a79f5669fae-scripts\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.905381 master-0 kubenswrapper[37036]: I0312 14:48:50.905359 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f28fa9-b72d-471a-b089-9a79f5669fae-scripts\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.905471 master-0 kubenswrapper[37036]: I0312 14:48:50.905455 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-lib\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.907142 master-0 kubenswrapper[37036]: I0312 14:48:50.907099 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30f28fa9-b72d-471a-b089-9a79f5669fae-var-lib\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:50.919468 master-0 kubenswrapper[37036]: I0312 14:48:50.919426 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk86h\" (UniqueName: \"kubernetes.io/projected/30f28fa9-b72d-471a-b089-9a79f5669fae-kube-api-access-tk86h\") pod \"ovn-controller-ovs-6l42m\" (UID: \"30f28fa9-b72d-471a-b089-9a79f5669fae\") " pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:51.009324 master-0 kubenswrapper[37036]: I0312 14:48:51.008574 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:48:51.567018 master-0 kubenswrapper[37036]: I0312 14:48:51.566264 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61c2ff73-4257-4af9-9dde-b43c9315431d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^df84f46e-9a17-4b48-b812-fe3f92af0f79\") pod \"rabbitmq-cell1-server-0\" (UID: \"f063fb36-4428-461a-8b29-3750c3f8217f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:51.839339 master-0 kubenswrapper[37036]: I0312 14:48:51.838439 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:48:52.618288 master-0 kubenswrapper[37036]: I0312 14:48:52.618205 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f656dc5d-da7b-41b3-8258-fce1aa12e2bf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^73c2cdf1-1d88-43d1-b8ea-3485125e425e\") pod \"openstack-cell1-galera-0\" (UID: \"e303709a-0166-4153-9e20-0351599d1a9c\") " pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:52.776987 master-0 kubenswrapper[37036]: I0312 14:48:52.776923 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 14:48:53.354557 master-0 kubenswrapper[37036]: I0312 14:48:53.354499 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 14:48:53.356483 master-0 kubenswrapper[37036]: I0312 14:48:53.356444 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.360763 master-0 kubenswrapper[37036]: I0312 14:48:53.360725 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 14:48:53.360993 master-0 kubenswrapper[37036]: I0312 14:48:53.360954 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 14:48:53.360993 master-0 kubenswrapper[37036]: I0312 14:48:53.360963 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 14:48:53.361118 master-0 kubenswrapper[37036]: I0312 14:48:53.361088 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 14:48:53.375820 master-0 kubenswrapper[37036]: I0312 14:48:53.375742 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 14:48:53.489295 master-0 kubenswrapper[37036]: I0312 14:48:53.489171 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489680 master-0 kubenswrapper[37036]: I0312 14:48:53.489387 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489680 master-0 kubenswrapper[37036]: I0312 14:48:53.489470 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489680 master-0 kubenswrapper[37036]: I0312 14:48:53.489639 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489833 master-0 kubenswrapper[37036]: I0312 14:48:53.489698 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489833 master-0 kubenswrapper[37036]: I0312 14:48:53.489788 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489915 master-0 kubenswrapper[37036]: I0312 14:48:53.489836 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3fee956c-1930-49df-9001-dcf91e20b35e\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^217f41f4-d7f6-4f1e-827a-054c2109a16e\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.489915 master-0 kubenswrapper[37036]: I0312 14:48:53.489874 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfxs\" (UniqueName: \"kubernetes.io/projected/bf3d5632-6ab4-4408-a837-7897110106d4-kube-api-access-6lfxs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.593732 master-0 kubenswrapper[37036]: I0312 14:48:53.593651 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594043 master-0 kubenswrapper[37036]: I0312 14:48:53.593799 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594043 master-0 kubenswrapper[37036]: I0312 14:48:53.593872 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594043 master-0 kubenswrapper[37036]: I0312 14:48:53.593912 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594443 master-0 kubenswrapper[37036]: I0312 14:48:53.594408 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3fee956c-1930-49df-9001-dcf91e20b35e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^217f41f4-d7f6-4f1e-827a-054c2109a16e\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594507 master-0 kubenswrapper[37036]: I0312 14:48:53.594466 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfxs\" (UniqueName: \"kubernetes.io/projected/bf3d5632-6ab4-4408-a837-7897110106d4-kube-api-access-6lfxs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594771 master-0 kubenswrapper[37036]: I0312 14:48:53.594600 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.594771 master-0 kubenswrapper[37036]: I0312 14:48:53.594677 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.595950 master-0 kubenswrapper[37036]: I0312 14:48:53.595885 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.596495 master-0 kubenswrapper[37036]: I0312 14:48:53.596459 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.600928 master-0 kubenswrapper[37036]: I0312 14:48:53.600808 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.601060 master-0 kubenswrapper[37036]: I0312 14:48:53.601008 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3d5632-6ab4-4408-a837-7897110106d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.601883 master-0 kubenswrapper[37036]: I0312 14:48:53.601853 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 14:48:53.601975 master-0 kubenswrapper[37036]: I0312 14:48:53.601887 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3fee956c-1930-49df-9001-dcf91e20b35e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^217f41f4-d7f6-4f1e-827a-054c2109a16e\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/832528e9b85a14d07a3a0e69f200d6b92cf5cfc9e9c784e935ef80903e716ce9/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.602796 master-0 kubenswrapper[37036]: I0312 14:48:53.602755 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.606403 master-0 kubenswrapper[37036]: I0312 14:48:53.606323 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d5632-6ab4-4408-a837-7897110106d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:53.614106 master-0 kubenswrapper[37036]: I0312 14:48:53.614048 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfxs\" (UniqueName: \"kubernetes.io/projected/bf3d5632-6ab4-4408-a837-7897110106d4-kube-api-access-6lfxs\") pod \"ovsdbserver-sb-0\" (UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:55.053468 master-0 kubenswrapper[37036]: I0312 14:48:55.053399 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3fee956c-1930-49df-9001-dcf91e20b35e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^217f41f4-d7f6-4f1e-827a-054c2109a16e\") pod \"ovsdbserver-sb-0\" 
(UID: \"bf3d5632-6ab4-4408-a837-7897110106d4\") " pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:55.195849 master-0 kubenswrapper[37036]: I0312 14:48:55.195800 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 14:48:55.379675 master-0 kubenswrapper[37036]: I0312 14:48:55.379548 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 14:48:55.385086 master-0 kubenswrapper[37036]: I0312 14:48:55.383764 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.386639 master-0 kubenswrapper[37036]: I0312 14:48:55.386614 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 14:48:55.386765 master-0 kubenswrapper[37036]: I0312 14:48:55.386677 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 14:48:55.387010 master-0 kubenswrapper[37036]: I0312 14:48:55.386973 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 14:48:55.407732 master-0 kubenswrapper[37036]: I0312 14:48:55.406457 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 14:48:55.540418 master-0 kubenswrapper[37036]: I0312 14:48:55.540321 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99ce366d-a3e0-4e6f-9e67-8266ee7447bc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7ec2c716-b975-47af-8ae5-5a9dd1f7c891\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.540418 master-0 kubenswrapper[37036]: I0312 14:48:55.540394 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.540699 master-0 kubenswrapper[37036]: I0312 14:48:55.540480 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.540699 master-0 kubenswrapper[37036]: I0312 14:48:55.540565 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.540699 master-0 kubenswrapper[37036]: I0312 14:48:55.540686 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.541724 master-0 kubenswrapper[37036]: I0312 14:48:55.541683 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jsj\" (UniqueName: \"kubernetes.io/projected/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-kube-api-access-64jsj\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.541724 master-0 kubenswrapper[37036]: I0312 14:48:55.541720 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.541837 master-0 kubenswrapper[37036]: I0312 14:48:55.541770 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.644648 master-0 kubenswrapper[37036]: I0312 14:48:55.644545 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.644830 master-0 kubenswrapper[37036]: I0312 14:48:55.644668 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99ce366d-a3e0-4e6f-9e67-8266ee7447bc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7ec2c716-b975-47af-8ae5-5a9dd1f7c891\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.644830 master-0 kubenswrapper[37036]: I0312 14:48:55.644703 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.644912 master-0 kubenswrapper[37036]: I0312 14:48:55.644843 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.644912 master-0 kubenswrapper[37036]: I0312 14:48:55.644866 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.645297 master-0 kubenswrapper[37036]: I0312 14:48:55.645251 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.645560 master-0 kubenswrapper[37036]: I0312 14:48:55.645517 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.645880 master-0 kubenswrapper[37036]: I0312 14:48:55.645809 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jsj\" (UniqueName: \"kubernetes.io/projected/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-kube-api-access-64jsj\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.645955 master-0 kubenswrapper[37036]: I0312 14:48:55.645932 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.646098 master-0 kubenswrapper[37036]: I0312 14:48:55.645929 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.649195 master-0 kubenswrapper[37036]: I0312 14:48:55.647738 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.649195 master-0 kubenswrapper[37036]: I0312 14:48:55.648753 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.649415 master-0 kubenswrapper[37036]: I0312 14:48:55.649325 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 14:48:55.649415 master-0 kubenswrapper[37036]: I0312 14:48:55.649359 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99ce366d-a3e0-4e6f-9e67-8266ee7447bc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7ec2c716-b975-47af-8ae5-5a9dd1f7c891\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/15bece0402c4d89b51dc66c3b2e8eade912f68344668ff38ce932adb9b0b46a0/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.649537 master-0 kubenswrapper[37036]: I0312 14:48:55.649506 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.651411 master-0 kubenswrapper[37036]: I0312 14:48:55.651375 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:55.673084 master-0 kubenswrapper[37036]: I0312 14:48:55.673030 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jsj\" (UniqueName: \"kubernetes.io/projected/46cbbfbf-551d-40f6-ab13-5a988d23c1d4-kube-api-access-64jsj\") pod \"ovsdbserver-nb-0\" (UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:57.015286 master-0 kubenswrapper[37036]: I0312 14:48:57.015225 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99ce366d-a3e0-4e6f-9e67-8266ee7447bc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7ec2c716-b975-47af-8ae5-5a9dd1f7c891\") pod \"ovsdbserver-nb-0\" 
(UID: \"46cbbfbf-551d-40f6-ab13-5a988d23c1d4\") " pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:57.209747 master-0 kubenswrapper[37036]: I0312 14:48:57.209613 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 14:48:59.112070 master-0 kubenswrapper[37036]: I0312 14:48:59.112029 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 14:48:59.368859 master-0 kubenswrapper[37036]: I0312 14:48:59.368764 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:48:59.378641 master-0 kubenswrapper[37036]: I0312 14:48:59.378571 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4mq52"] Mar 12 14:48:59.752670 master-0 kubenswrapper[37036]: I0312 14:48:59.752611 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 14:48:59.760990 master-0 kubenswrapper[37036]: W0312 14:48:59.760928 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode303709a_0166_4153_9e20_0351599d1a9c.slice/crio-5e77384f04b0e9365a8f504e730cd7595649461314e3322b41206ed77a6a8c8f WatchSource:0}: Error finding container 5e77384f04b0e9365a8f504e730cd7595649461314e3322b41206ed77a6a8c8f: Status 404 returned error can't find the container with id 5e77384f04b0e9365a8f504e730cd7595649461314e3322b41206ed77a6a8c8f Mar 12 14:48:59.778627 master-0 kubenswrapper[37036]: I0312 14:48:59.778561 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 14:48:59.792363 master-0 kubenswrapper[37036]: I0312 14:48:59.789403 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 14:48:59.804224 master-0 kubenswrapper[37036]: I0312 14:48:59.801111 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Mar 12 14:49:00.111742 master-0 kubenswrapper[37036]: I0312 14:49:00.111643 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6l42m"] Mar 12 14:49:00.172357 master-0 kubenswrapper[37036]: I0312 14:49:00.172238 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"114161cb-b5bb-41d9-b085-63a181ec3480","Type":"ContainerStarted","Data":"cf8f2d09bcc7391c832ed51b8d0feed855c04a3844f63eceb791df61fbce24d1"} Mar 12 14:49:00.176825 master-0 kubenswrapper[37036]: I0312 14:49:00.176056 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52" event={"ID":"a24e803b-32b7-4b4b-bb59-f58b9a506626","Type":"ContainerStarted","Data":"ae520b571d117b8dba78482985360f902cd7530f2be30c89ddb3a5587c3b5360"} Mar 12 14:49:00.180465 master-0 kubenswrapper[37036]: I0312 14:49:00.180107 37036 generic.go:334] "Generic (PLEG): container finished" podID="c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" containerID="ff712403c14ea07cf4b411e431f0357f57571a9207fa9fc23bc2e188f7ad8f41" exitCode=0 Mar 12 14:49:00.180465 master-0 kubenswrapper[37036]: I0312 14:49:00.180432 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" event={"ID":"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83","Type":"ContainerDied","Data":"ff712403c14ea07cf4b411e431f0357f57571a9207fa9fc23bc2e188f7ad8f41"} Mar 12 14:49:00.181555 master-0 kubenswrapper[37036]: W0312 14:49:00.181248 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f28fa9_b72d_471a_b089_9a79f5669fae.slice/crio-e5f48acec5f9007412ad8ecc92b8b9222e37715831220c51f6666db6019a41b3 WatchSource:0}: Error finding container e5f48acec5f9007412ad8ecc92b8b9222e37715831220c51f6666db6019a41b3: Status 404 returned error can't find the container with id 
e5f48acec5f9007412ad8ecc92b8b9222e37715831220c51f6666db6019a41b3 Mar 12 14:49:00.201372 master-0 kubenswrapper[37036]: I0312 14:49:00.201306 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:49:00.219518 master-0 kubenswrapper[37036]: I0312 14:49:00.216734 37036 generic.go:334] "Generic (PLEG): container finished" podID="cd13635d-f1f1-4c88-ab76-560157eb3878" containerID="d2ea60849d285b44507ce9b2bbd3aa6a29fdcf4014a070e1e7b0042e94c786d7" exitCode=0 Mar 12 14:49:00.219518 master-0 kubenswrapper[37036]: I0312 14:49:00.216983 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" event={"ID":"cd13635d-f1f1-4c88-ab76-560157eb3878","Type":"ContainerDied","Data":"d2ea60849d285b44507ce9b2bbd3aa6a29fdcf4014a070e1e7b0042e94c786d7"} Mar 12 14:49:00.238543 master-0 kubenswrapper[37036]: W0312 14:49:00.238487 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4510d55e_0c1c_463c_b311_74e9c8864474.slice/crio-3b108320c611869bbb1beff5538348d294fb70aca20a7d1ff5f433c68f9d9cec WatchSource:0}: Error finding container 3b108320c611869bbb1beff5538348d294fb70aca20a7d1ff5f433c68f9d9cec: Status 404 returned error can't find the container with id 3b108320c611869bbb1beff5538348d294fb70aca20a7d1ff5f433c68f9d9cec Mar 12 14:49:00.253953 master-0 kubenswrapper[37036]: I0312 14:49:00.248787 37036 generic.go:334] "Generic (PLEG): container finished" podID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerID="88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229" exitCode=0 Mar 12 14:49:00.253953 master-0 kubenswrapper[37036]: I0312 14:49:00.249163 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" event={"ID":"9866f383-8abf-4106-9cf6-9e6265fe07b4","Type":"ContainerDied","Data":"88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229"} Mar 12 
14:49:00.253953 master-0 kubenswrapper[37036]: I0312 14:49:00.249202 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" event={"ID":"9866f383-8abf-4106-9cf6-9e6265fe07b4","Type":"ContainerStarted","Data":"00dfd6ee8f6122f7aca9e4d925b373cb8a7826ec186e0a61c205ea58bedb3fd6"} Mar 12 14:49:00.273350 master-0 kubenswrapper[37036]: I0312 14:49:00.262071 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e303709a-0166-4153-9e20-0351599d1a9c","Type":"ContainerStarted","Data":"5e77384f04b0e9365a8f504e730cd7595649461314e3322b41206ed77a6a8c8f"} Mar 12 14:49:00.273350 master-0 kubenswrapper[37036]: I0312 14:49:00.264815 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6a9660b-6127-48b0-82e7-cf5e38a66429","Type":"ContainerStarted","Data":"256d1be2f68af76594646ee40b6631271ab47e6f302114e01826e9eaa2054cc2"} Mar 12 14:49:00.293849 master-0 kubenswrapper[37036]: I0312 14:49:00.292949 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f063fb36-4428-461a-8b29-3750c3f8217f","Type":"ContainerStarted","Data":"ba3c61f588434c1ff46feec3f9ab29236567000780134a8f3e9365b939156661"} Mar 12 14:49:00.299332 master-0 kubenswrapper[37036]: I0312 14:49:00.299225 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e","Type":"ContainerStarted","Data":"eca44b46d81ec26978e65e033f66ec51dc5aa5e8d6887ba2edc550c89901f550"} Mar 12 14:49:00.348307 master-0 kubenswrapper[37036]: I0312 14:49:00.348200 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 14:49:00.886691 master-0 kubenswrapper[37036]: I0312 14:49:00.886578 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:49:00.947198 master-0 kubenswrapper[37036]: I0312 14:49:00.947145 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:49:00.984020 master-0 kubenswrapper[37036]: I0312 14:49:00.983923 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9lcj\" (UniqueName: \"kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj\") pod \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " Mar 12 14:49:00.984271 master-0 kubenswrapper[37036]: I0312 14:49:00.984104 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config\") pod \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " Mar 12 14:49:00.984271 master-0 kubenswrapper[37036]: I0312 14:49:00.984257 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config\") pod \"cd13635d-f1f1-4c88-ab76-560157eb3878\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " Mar 12 14:49:00.986736 master-0 kubenswrapper[37036]: I0312 14:49:00.984284 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjlxf\" (UniqueName: \"kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf\") pod \"cd13635d-f1f1-4c88-ab76-560157eb3878\" (UID: \"cd13635d-f1f1-4c88-ab76-560157eb3878\") " Mar 12 14:49:00.986736 master-0 kubenswrapper[37036]: I0312 14:49:00.984313 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc\") pod \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\" (UID: \"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83\") " Mar 12 14:49:00.995534 master-0 kubenswrapper[37036]: I0312 14:49:00.995435 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj" (OuterVolumeSpecName: "kube-api-access-x9lcj") pod "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" (UID: "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83"). InnerVolumeSpecName "kube-api-access-x9lcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:01.022960 master-0 kubenswrapper[37036]: I0312 14:49:01.021107 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config" (OuterVolumeSpecName: "config") pod "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" (UID: "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:01.026815 master-0 kubenswrapper[37036]: I0312 14:49:01.026412 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf" (OuterVolumeSpecName: "kube-api-access-gjlxf") pod "cd13635d-f1f1-4c88-ab76-560157eb3878" (UID: "cd13635d-f1f1-4c88-ab76-560157eb3878"). InnerVolumeSpecName "kube-api-access-gjlxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:01.029039 master-0 kubenswrapper[37036]: I0312 14:49:01.027144 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config" (OuterVolumeSpecName: "config") pod "cd13635d-f1f1-4c88-ab76-560157eb3878" (UID: "cd13635d-f1f1-4c88-ab76-560157eb3878"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:01.039119 master-0 kubenswrapper[37036]: I0312 14:49:01.039064 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" (UID: "c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:01.087940 master-0 kubenswrapper[37036]: I0312 14:49:01.087823 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:01.087940 master-0 kubenswrapper[37036]: I0312 14:49:01.087869 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd13635d-f1f1-4c88-ab76-560157eb3878-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:01.087940 master-0 kubenswrapper[37036]: I0312 14:49:01.087882 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjlxf\" (UniqueName: \"kubernetes.io/projected/cd13635d-f1f1-4c88-ab76-560157eb3878-kube-api-access-gjlxf\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:01.087940 master-0 kubenswrapper[37036]: I0312 14:49:01.087892 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:01.087940 master-0 kubenswrapper[37036]: I0312 14:49:01.087916 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9lcj\" (UniqueName: \"kubernetes.io/projected/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83-kube-api-access-x9lcj\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:01.325326 master-0 kubenswrapper[37036]: I0312 14:49:01.313660 37036 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 14:49:01.358925 master-0 kubenswrapper[37036]: I0312 14:49:01.355882 37036 generic.go:334] "Generic (PLEG): container finished" podID="4510d55e-0c1c-463c-b311-74e9c8864474" containerID="dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05" exitCode=0 Mar 12 14:49:01.358925 master-0 kubenswrapper[37036]: I0312 14:49:01.356042 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" event={"ID":"4510d55e-0c1c-463c-b311-74e9c8864474","Type":"ContainerDied","Data":"dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05"} Mar 12 14:49:01.358925 master-0 kubenswrapper[37036]: I0312 14:49:01.356089 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" event={"ID":"4510d55e-0c1c-463c-b311-74e9c8864474","Type":"ContainerStarted","Data":"3b108320c611869bbb1beff5538348d294fb70aca20a7d1ff5f433c68f9d9cec"} Mar 12 14:49:01.363919 master-0 kubenswrapper[37036]: I0312 14:49:01.363687 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bf3d5632-6ab4-4408-a837-7897110106d4","Type":"ContainerStarted","Data":"2d62031c0df7b56acd223204ff097316a3562969a0e85ac912809742f28ba2ba"} Mar 12 14:49:01.370117 master-0 kubenswrapper[37036]: I0312 14:49:01.367074 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" event={"ID":"c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83","Type":"ContainerDied","Data":"42958e97293f4dc711a4ffd619c8e33515d6b2f488c40cb8fa433340e2db32c4"} Mar 12 14:49:01.370117 master-0 kubenswrapper[37036]: I0312 14:49:01.367119 37036 scope.go:117] "RemoveContainer" containerID="ff712403c14ea07cf4b411e431f0357f57571a9207fa9fc23bc2e188f7ad8f41" Mar 12 14:49:01.370117 master-0 kubenswrapper[37036]: I0312 14:49:01.367344 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-2nlww" Mar 12 14:49:01.378177 master-0 kubenswrapper[37036]: I0312 14:49:01.374743 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6l42m" event={"ID":"30f28fa9-b72d-471a-b089-9a79f5669fae","Type":"ContainerStarted","Data":"e5f48acec5f9007412ad8ecc92b8b9222e37715831220c51f6666db6019a41b3"} Mar 12 14:49:01.378177 master-0 kubenswrapper[37036]: I0312 14:49:01.377337 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" event={"ID":"cd13635d-f1f1-4c88-ab76-560157eb3878","Type":"ContainerDied","Data":"b885e5ff268839b8b10b993a8462a2ae523baaf496a7580a15c00820f902c3a1"} Mar 12 14:49:01.378177 master-0 kubenswrapper[37036]: I0312 14:49:01.377444 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-l8v4f" Mar 12 14:49:01.387921 master-0 kubenswrapper[37036]: I0312 14:49:01.382401 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" event={"ID":"9866f383-8abf-4106-9cf6-9e6265fe07b4","Type":"ContainerStarted","Data":"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2"} Mar 12 14:49:01.387921 master-0 kubenswrapper[37036]: I0312 14:49:01.383592 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:49:01.741023 master-0 kubenswrapper[37036]: E0312 14:49:01.735985 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd13635d_f1f1_4c88_ab76_560157eb3878.slice/crio-b885e5ff268839b8b10b993a8462a2ae523baaf496a7580a15c00820f902c3a1\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0be6951_f43a_4e2f_b1b1_fbe5a0ec6e83.slice/crio-42958e97293f4dc711a4ffd619c8e33515d6b2f488c40cb8fa433340e2db32c4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0be6951_f43a_4e2f_b1b1_fbe5a0ec6e83.slice\": RecentStats: unable to find data in memory cache]" Mar 12 14:49:01.913675 master-0 kubenswrapper[37036]: I0312 14:49:01.913470 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:49:01.932996 master-0 kubenswrapper[37036]: I0312 14:49:01.932871 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-l8v4f"] Mar 12 14:49:01.954377 master-0 kubenswrapper[37036]: I0312 14:49:01.953548 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:49:01.962189 master-0 kubenswrapper[37036]: I0312 14:49:01.961012 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-2nlww"] Mar 12 14:49:01.965478 master-0 kubenswrapper[37036]: I0312 14:49:01.965351 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" podStartSLOduration=18.965330696 podStartE2EDuration="18.965330696s" podCreationTimestamp="2026-03-12 14:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:01.855768746 +0000 UTC m=+800.863509683" watchObservedRunningTime="2026-03-12 14:49:01.965330696 +0000 UTC m=+800.973071643" Mar 12 14:49:03.131783 master-0 kubenswrapper[37036]: W0312 14:49:03.131712 37036 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46cbbfbf_551d_40f6_ab13_5a988d23c1d4.slice/crio-bd82c6d8c175ac79edcfab9bd5f7d4290833a69654834b766699b1dc0e261fd4 WatchSource:0}: Error finding container bd82c6d8c175ac79edcfab9bd5f7d4290833a69654834b766699b1dc0e261fd4: Status 404 returned error can't find the container with id bd82c6d8c175ac79edcfab9bd5f7d4290833a69654834b766699b1dc0e261fd4 Mar 12 14:49:03.253298 master-0 kubenswrapper[37036]: I0312 14:49:03.253243 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" path="/var/lib/kubelet/pods/c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83/volumes" Mar 12 14:49:03.253893 master-0 kubenswrapper[37036]: I0312 14:49:03.253870 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd13635d-f1f1-4c88-ab76-560157eb3878" path="/var/lib/kubelet/pods/cd13635d-f1f1-4c88-ab76-560157eb3878/volumes" Mar 12 14:49:03.404164 master-0 kubenswrapper[37036]: I0312 14:49:03.403987 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"46cbbfbf-551d-40f6-ab13-5a988d23c1d4","Type":"ContainerStarted","Data":"bd82c6d8c175ac79edcfab9bd5f7d4290833a69654834b766699b1dc0e261fd4"} Mar 12 14:49:08.306779 master-0 kubenswrapper[37036]: I0312 14:49:08.306742 37036 scope.go:117] "RemoveContainer" containerID="d2ea60849d285b44507ce9b2bbd3aa6a29fdcf4014a070e1e7b0042e94c786d7" Mar 12 14:49:08.798071 master-0 kubenswrapper[37036]: I0312 14:49:08.798022 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:49:09.292980 master-0 kubenswrapper[37036]: I0312 14:49:09.292625 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:49:11.515373 master-0 kubenswrapper[37036]: I0312 14:49:11.515201 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52" 
event={"ID":"a24e803b-32b7-4b4b-bb59-f58b9a506626","Type":"ContainerStarted","Data":"2f3583fd9b17cf3fdba5103612838d4e339d3a988b4782691a8e22ca4f31e373"} Mar 12 14:49:11.516076 master-0 kubenswrapper[37036]: I0312 14:49:11.516045 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4mq52" Mar 12 14:49:11.519385 master-0 kubenswrapper[37036]: I0312 14:49:11.519037 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6l42m" event={"ID":"30f28fa9-b72d-471a-b089-9a79f5669fae","Type":"ContainerStarted","Data":"fc99a79522e2473a9651238b01fbe7700a855422c5a017d454166d8305d23e78"} Mar 12 14:49:11.524529 master-0 kubenswrapper[37036]: I0312 14:49:11.524138 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="dnsmasq-dns" containerID="cri-o://db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9" gracePeriod=10 Mar 12 14:49:11.524529 master-0 kubenswrapper[37036]: I0312 14:49:11.524229 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" event={"ID":"4510d55e-0c1c-463c-b311-74e9c8864474","Type":"ContainerStarted","Data":"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9"} Mar 12 14:49:11.524529 master-0 kubenswrapper[37036]: I0312 14:49:11.524278 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:49:11.528955 master-0 kubenswrapper[37036]: I0312 14:49:11.528664 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e303709a-0166-4153-9e20-0351599d1a9c","Type":"ContainerStarted","Data":"787d358271da624f851a089b558b45124a4a2502097179461ca8dfd1cea5bdb4"} Mar 12 14:49:11.535064 master-0 kubenswrapper[37036]: I0312 14:49:11.533516 37036 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"114161cb-b5bb-41d9-b085-63a181ec3480","Type":"ContainerStarted","Data":"dbba7df2e812df30bb97948c8198035b900e91076dd7ba7c03f2f5f7c1ce155b"} Mar 12 14:49:11.537356 master-0 kubenswrapper[37036]: I0312 14:49:11.537281 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4mq52" podStartSLOduration=9.9237223 podStartE2EDuration="21.537263102s" podCreationTimestamp="2026-03-12 14:48:50 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.382954958 +0000 UTC m=+798.390695895" lastFinishedPulling="2026-03-12 14:49:10.99649576 +0000 UTC m=+810.004236697" observedRunningTime="2026-03-12 14:49:11.534856522 +0000 UTC m=+810.542597469" watchObservedRunningTime="2026-03-12 14:49:11.537263102 +0000 UTC m=+810.545004039" Mar 12 14:49:11.537762 master-0 kubenswrapper[37036]: I0312 14:49:11.537729 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"46cbbfbf-551d-40f6-ab13-5a988d23c1d4","Type":"ContainerStarted","Data":"bf447f1c7681885f8ace8bde3659fdf6b3d51c3592fb59ab563628e5006f0398"} Mar 12 14:49:11.542866 master-0 kubenswrapper[37036]: I0312 14:49:11.542840 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"bf3d5632-6ab4-4408-a837-7897110106d4","Type":"ContainerStarted","Data":"24843a656f31b5310dc0f3cc4fa2c6f81ba92ea16994ca78b78b50e63cd308a9"} Mar 12 14:49:11.548010 master-0 kubenswrapper[37036]: I0312 14:49:11.547965 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6a9660b-6127-48b0-82e7-cf5e38a66429","Type":"ContainerStarted","Data":"2e2d98f879b49db7982148f862b949cd4eb1c65641f277ba0f576b76048595fa"} Mar 12 14:49:11.548254 master-0 kubenswrapper[37036]: I0312 14:49:11.548234 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 14:49:11.565067 master-0 
kubenswrapper[37036]: I0312 14:49:11.564850 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" podStartSLOduration=29.564829736 podStartE2EDuration="29.564829736s" podCreationTimestamp="2026-03-12 14:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:11.556093949 +0000 UTC m=+810.563834886" watchObservedRunningTime="2026-03-12 14:49:11.564829736 +0000 UTC m=+810.572570673" Mar 12 14:49:11.691018 master-0 kubenswrapper[37036]: I0312 14:49:11.688839 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.324198063 podStartE2EDuration="26.688799273s" podCreationTimestamp="2026-03-12 14:48:45 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.127214441 +0000 UTC m=+798.134955378" lastFinishedPulling="2026-03-12 14:49:09.491815651 +0000 UTC m=+808.499556588" observedRunningTime="2026-03-12 14:49:11.671297379 +0000 UTC m=+810.679038316" watchObservedRunningTime="2026-03-12 14:49:11.688799273 +0000 UTC m=+810.696540210" Mar 12 14:49:12.109988 master-0 kubenswrapper[37036]: I0312 14:49:12.109946 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:49:12.195049 master-0 kubenswrapper[37036]: I0312 14:49:12.194486 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") pod \"4510d55e-0c1c-463c-b311-74e9c8864474\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " Mar 12 14:49:12.195049 master-0 kubenswrapper[37036]: I0312 14:49:12.194616 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc\") pod \"4510d55e-0c1c-463c-b311-74e9c8864474\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " Mar 12 14:49:12.195049 master-0 kubenswrapper[37036]: I0312 14:49:12.194669 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288hc\" (UniqueName: \"kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc\") pod \"4510d55e-0c1c-463c-b311-74e9c8864474\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " Mar 12 14:49:12.428667 master-0 kubenswrapper[37036]: I0312 14:49:12.428511 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc" (OuterVolumeSpecName: "kube-api-access-288hc") pod "4510d55e-0c1c-463c-b311-74e9c8864474" (UID: "4510d55e-0c1c-463c-b311-74e9c8864474"). InnerVolumeSpecName "kube-api-access-288hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:12.452662 master-0 kubenswrapper[37036]: E0312 14:49:12.452472 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config podName:4510d55e-0c1c-463c-b311-74e9c8864474 nodeName:}" failed. 
No retries permitted until 2026-03-12 14:49:12.952438438 +0000 UTC m=+811.960179375 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config") pod "4510d55e-0c1c-463c-b311-74e9c8864474" (UID: "4510d55e-0c1c-463c-b311-74e9c8864474") : error deleting /var/lib/kubelet/pods/4510d55e-0c1c-463c-b311-74e9c8864474/volume-subpaths: remove /var/lib/kubelet/pods/4510d55e-0c1c-463c-b311-74e9c8864474/volume-subpaths: no such file or directory Mar 12 14:49:12.453056 master-0 kubenswrapper[37036]: I0312 14:49:12.453003 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4510d55e-0c1c-463c-b311-74e9c8864474" (UID: "4510d55e-0c1c-463c-b311-74e9c8864474"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:12.502274 master-0 kubenswrapper[37036]: I0312 14:49:12.502205 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:12.502274 master-0 kubenswrapper[37036]: I0312 14:49:12.502249 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288hc\" (UniqueName: \"kubernetes.io/projected/4510d55e-0c1c-463c-b311-74e9c8864474-kube-api-access-288hc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:12.557917 master-0 kubenswrapper[37036]: I0312 14:49:12.557856 37036 generic.go:334] "Generic (PLEG): container finished" podID="30f28fa9-b72d-471a-b089-9a79f5669fae" containerID="fc99a79522e2473a9651238b01fbe7700a855422c5a017d454166d8305d23e78" exitCode=0 Mar 12 14:49:12.558498 master-0 kubenswrapper[37036]: I0312 14:49:12.558477 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6l42m" 
event={"ID":"30f28fa9-b72d-471a-b089-9a79f5669fae","Type":"ContainerDied","Data":"fc99a79522e2473a9651238b01fbe7700a855422c5a017d454166d8305d23e78"} Mar 12 14:49:12.576095 master-0 kubenswrapper[37036]: I0312 14:49:12.576042 37036 generic.go:334] "Generic (PLEG): container finished" podID="4510d55e-0c1c-463c-b311-74e9c8864474" containerID="db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9" exitCode=0 Mar 12 14:49:12.576170 master-0 kubenswrapper[37036]: I0312 14:49:12.576114 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" event={"ID":"4510d55e-0c1c-463c-b311-74e9c8864474","Type":"ContainerDied","Data":"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9"} Mar 12 14:49:12.576170 master-0 kubenswrapper[37036]: I0312 14:49:12.576146 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" event={"ID":"4510d55e-0c1c-463c-b311-74e9c8864474","Type":"ContainerDied","Data":"3b108320c611869bbb1beff5538348d294fb70aca20a7d1ff5f433c68f9d9cec"} Mar 12 14:49:12.576170 master-0 kubenswrapper[37036]: I0312 14:49:12.576163 37036 scope.go:117] "RemoveContainer" containerID="db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9" Mar 12 14:49:12.576319 master-0 kubenswrapper[37036]: I0312 14:49:12.576275 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-5gmb4" Mar 12 14:49:12.593359 master-0 kubenswrapper[37036]: I0312 14:49:12.593318 37036 scope.go:117] "RemoveContainer" containerID="dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05" Mar 12 14:49:12.594919 master-0 kubenswrapper[37036]: I0312 14:49:12.594871 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f063fb36-4428-461a-8b29-3750c3f8217f","Type":"ContainerStarted","Data":"f7462dd01e0eb4a4819a9668830cc108b07028a9765d03e8d331f4a0c9108a53"} Mar 12 14:49:12.613669 master-0 kubenswrapper[37036]: I0312 14:49:12.613622 37036 scope.go:117] "RemoveContainer" containerID="db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9" Mar 12 14:49:12.613977 master-0 kubenswrapper[37036]: E0312 14:49:12.613947 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9\": container with ID starting with db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9 not found: ID does not exist" containerID="db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9" Mar 12 14:49:12.614057 master-0 kubenswrapper[37036]: I0312 14:49:12.613978 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9"} err="failed to get container status \"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9\": rpc error: code = NotFound desc = could not find container \"db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9\": container with ID starting with db807882ce831ab037fa9c360c5e9d0cb914c519c24750497988175da7abffc9 not found: ID does not exist" Mar 12 14:49:12.614057 master-0 kubenswrapper[37036]: I0312 14:49:12.614001 37036 scope.go:117] "RemoveContainer" 
containerID="dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05" Mar 12 14:49:12.614269 master-0 kubenswrapper[37036]: E0312 14:49:12.614237 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05\": container with ID starting with dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05 not found: ID does not exist" containerID="dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05" Mar 12 14:49:12.614309 master-0 kubenswrapper[37036]: I0312 14:49:12.614263 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05"} err="failed to get container status \"dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05\": rpc error: code = NotFound desc = could not find container \"dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05\": container with ID starting with dfa69c66652fc3d44765070bfa9bab08744090cc7955dbdc13717b53267efd05 not found: ID does not exist" Mar 12 14:49:13.028371 master-0 kubenswrapper[37036]: I0312 14:49:13.028311 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") pod \"4510d55e-0c1c-463c-b311-74e9c8864474\" (UID: \"4510d55e-0c1c-463c-b311-74e9c8864474\") " Mar 12 14:49:13.029145 master-0 kubenswrapper[37036]: I0312 14:49:13.029124 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config" (OuterVolumeSpecName: "config") pod "4510d55e-0c1c-463c-b311-74e9c8864474" (UID: "4510d55e-0c1c-463c-b311-74e9c8864474"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:13.132565 master-0 kubenswrapper[37036]: I0312 14:49:13.132431 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4510d55e-0c1c-463c-b311-74e9c8864474-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:13.296323 master-0 kubenswrapper[37036]: I0312 14:49:13.296258 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:49:13.306980 master-0 kubenswrapper[37036]: I0312 14:49:13.306912 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-5gmb4"] Mar 12 14:49:13.625389 master-0 kubenswrapper[37036]: I0312 14:49:13.625319 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e","Type":"ContainerStarted","Data":"f40dd02a35e0d13c24e4347b09e1fd06de3a873ee8c19bfa2f7f841c96074bb0"} Mar 12 14:49:13.630145 master-0 kubenswrapper[37036]: I0312 14:49:13.630094 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6l42m" event={"ID":"30f28fa9-b72d-471a-b089-9a79f5669fae","Type":"ContainerStarted","Data":"59c3f02cee3939649eafdb3af70398a1d36e8ea21741c3e77bcd11d2a7c92c2e"} Mar 12 14:49:14.641177 master-0 kubenswrapper[37036]: I0312 14:49:14.641096 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6l42m" event={"ID":"30f28fa9-b72d-471a-b089-9a79f5669fae","Type":"ContainerStarted","Data":"8c8b6a1d9595e4de988233d159d0385c4e9c8c1633c940640f268f389174a984"} Mar 12 14:49:14.642011 master-0 kubenswrapper[37036]: I0312 14:49:14.641187 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:49:14.642011 master-0 kubenswrapper[37036]: I0312 14:49:14.641208 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:49:15.250123 master-0 kubenswrapper[37036]: I0312 14:49:15.250068 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" path="/var/lib/kubelet/pods/4510d55e-0c1c-463c-b311-74e9c8864474/volumes" Mar 12 14:49:17.676003 master-0 kubenswrapper[37036]: I0312 14:49:17.675935 37036 generic.go:334] "Generic (PLEG): container finished" podID="e303709a-0166-4153-9e20-0351599d1a9c" containerID="787d358271da624f851a089b558b45124a4a2502097179461ca8dfd1cea5bdb4" exitCode=0 Mar 12 14:49:17.676543 master-0 kubenswrapper[37036]: I0312 14:49:17.676012 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e303709a-0166-4153-9e20-0351599d1a9c","Type":"ContainerDied","Data":"787d358271da624f851a089b558b45124a4a2502097179461ca8dfd1cea5bdb4"} Mar 12 14:49:17.677701 master-0 kubenswrapper[37036]: I0312 14:49:17.677649 37036 generic.go:334] "Generic (PLEG): container finished" podID="114161cb-b5bb-41d9-b085-63a181ec3480" containerID="dbba7df2e812df30bb97948c8198035b900e91076dd7ba7c03f2f5f7c1ce155b" exitCode=0 Mar 12 14:49:17.677775 master-0 kubenswrapper[37036]: I0312 14:49:17.677737 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"114161cb-b5bb-41d9-b085-63a181ec3480","Type":"ContainerDied","Data":"dbba7df2e812df30bb97948c8198035b900e91076dd7ba7c03f2f5f7c1ce155b"} Mar 12 14:49:17.682412 master-0 kubenswrapper[37036]: I0312 14:49:17.681791 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"46cbbfbf-551d-40f6-ab13-5a988d23c1d4","Type":"ContainerStarted","Data":"b0a796256dfcb236a68e476f70ff6a997fe46036cde2f9f74ecd6ad4fb8eaf29"} Mar 12 14:49:17.683648 master-0 kubenswrapper[37036]: I0312 14:49:17.683611 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"bf3d5632-6ab4-4408-a837-7897110106d4","Type":"ContainerStarted","Data":"9d4a0e98c4645c2a702a1d07c1834a3f4cc8779b90b1f663506461e590547461"} Mar 12 14:49:17.714726 master-0 kubenswrapper[37036]: I0312 14:49:17.714654 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6l42m" podStartSLOduration=17.206541319 podStartE2EDuration="27.714633802s" podCreationTimestamp="2026-03-12 14:48:50 +0000 UTC" firstStartedPulling="2026-03-12 14:49:00.211275048 +0000 UTC m=+799.219015985" lastFinishedPulling="2026-03-12 14:49:10.719367531 +0000 UTC m=+809.727108468" observedRunningTime="2026-03-12 14:49:14.824923826 +0000 UTC m=+813.832664773" watchObservedRunningTime="2026-03-12 14:49:17.714633802 +0000 UTC m=+816.722374739" Mar 12 14:49:17.726016 master-0 kubenswrapper[37036]: I0312 14:49:17.725749 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.748222908 podStartE2EDuration="24.725729397s" podCreationTimestamp="2026-03-12 14:48:53 +0000 UTC" firstStartedPulling="2026-03-12 14:49:03.134813943 +0000 UTC m=+802.142554880" lastFinishedPulling="2026-03-12 14:49:17.112320442 +0000 UTC m=+816.120061369" observedRunningTime="2026-03-12 14:49:17.725540322 +0000 UTC m=+816.733281279" watchObservedRunningTime="2026-03-12 14:49:17.725729397 +0000 UTC m=+816.733470334" Mar 12 14:49:17.752691 master-0 kubenswrapper[37036]: I0312 14:49:17.752611 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.004390843 podStartE2EDuration="26.752592054s" podCreationTimestamp="2026-03-12 14:48:51 +0000 UTC" firstStartedPulling="2026-03-12 14:49:00.349224251 +0000 UTC m=+799.356965188" lastFinishedPulling="2026-03-12 14:49:17.097425472 +0000 UTC m=+816.105166399" observedRunningTime="2026-03-12 14:49:17.747462566 +0000 UTC m=+816.755203513" watchObservedRunningTime="2026-03-12 
14:49:17.752592054 +0000 UTC m=+816.760332981" Mar 12 14:49:18.210271 master-0 kubenswrapper[37036]: I0312 14:49:18.210202 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 14:49:18.246815 master-0 kubenswrapper[37036]: I0312 14:49:18.246766 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 14:49:18.695559 master-0 kubenswrapper[37036]: I0312 14:49:18.695479 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e303709a-0166-4153-9e20-0351599d1a9c","Type":"ContainerStarted","Data":"b06e894c2920e0b523152a67a8917b169ab284138dd1489eb60abe6c260c9c86"} Mar 12 14:49:18.697876 master-0 kubenswrapper[37036]: I0312 14:49:18.697795 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"114161cb-b5bb-41d9-b085-63a181ec3480","Type":"ContainerStarted","Data":"9d4ab919da314c73a8c6bc356f814e0e9555f10f775a8db4fe114b145572bf34"} Mar 12 14:49:18.698217 master-0 kubenswrapper[37036]: I0312 14:49:18.698177 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 14:49:18.728019 master-0 kubenswrapper[37036]: I0312 14:49:18.727918 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.471238959 podStartE2EDuration="34.727881272s" podCreationTimestamp="2026-03-12 14:48:44 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.780737621 +0000 UTC m=+798.788478558" lastFinishedPulling="2026-03-12 14:49:11.037379934 +0000 UTC m=+810.045120871" observedRunningTime="2026-03-12 14:49:18.720048637 +0000 UTC m=+817.727789584" watchObservedRunningTime="2026-03-12 14:49:18.727881272 +0000 UTC m=+817.735622209" Mar 12 14:49:18.740295 master-0 kubenswrapper[37036]: I0312 14:49:18.740153 37036 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 14:49:18.747676 master-0 kubenswrapper[37036]: I0312 14:49:18.747604 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.537008731 podStartE2EDuration="35.747587241s" podCreationTimestamp="2026-03-12 14:48:43 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.785005517 +0000 UTC m=+798.792746454" lastFinishedPulling="2026-03-12 14:49:10.995584027 +0000 UTC m=+810.003324964" observedRunningTime="2026-03-12 14:49:18.743649863 +0000 UTC m=+817.751390800" watchObservedRunningTime="2026-03-12 14:49:18.747587241 +0000 UTC m=+817.755328178" Mar 12 14:49:19.154241 master-0 kubenswrapper[37036]: I0312 14:49:19.154186 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rgttc"] Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: E0312 14:49:19.154883 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd13635d-f1f1-4c88-ab76-560157eb3878" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.154918 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd13635d-f1f1-4c88-ab76-560157eb3878" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: E0312 14:49:19.154967 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.154976 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: E0312 14:49:19.154997 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="dnsmasq-dns" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.155004 37036 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="dnsmasq-dns" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: E0312 14:49:19.155048 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.155054 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.155433 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4510d55e-0c1c-463c-b311-74e9c8864474" containerName="dnsmasq-dns" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.155527 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0be6951-f43a-4e2f-b1b1-fbe5a0ec6e83" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.155569 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd13635d-f1f1-4c88-ab76-560157eb3878" containerName="init" Mar 12 14:49:19.157702 master-0 kubenswrapper[37036]: I0312 14:49:19.156714 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.161913 master-0 kubenswrapper[37036]: I0312 14:49:19.161214 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 14:49:19.171860 master-0 kubenswrapper[37036]: I0312 14:49:19.171813 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovn-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.172102 master-0 kubenswrapper[37036]: I0312 14:49:19.171919 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:19.172102 master-0 kubenswrapper[37036]: I0312 14:49:19.172079 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-combined-ca-bundle\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.172265 master-0 kubenswrapper[37036]: I0312 14:49:19.172111 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovs-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.172265 master-0 kubenswrapper[37036]: I0312 14:49:19.172136 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qjrv\" (UniqueName: \"kubernetes.io/projected/e864e00e-7629-4dab-ae9a-55f609712148-kube-api-access-8qjrv\") pod 
\"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.172265 master-0 kubenswrapper[37036]: I0312 14:49:19.172229 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e864e00e-7629-4dab-ae9a-55f609712148-config\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.172387 master-0 kubenswrapper[37036]: I0312 14:49:19.172270 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.174468 master-0 kubenswrapper[37036]: I0312 14:49:19.174437 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.177043 master-0 kubenswrapper[37036]: I0312 14:49:19.177012 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 14:49:19.184514 master-0 kubenswrapper[37036]: I0312 14:49:19.183875 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rgttc"] Mar 12 14:49:19.195850 master-0 kubenswrapper[37036]: I0312 14:49:19.195804 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:19.196076 master-0 kubenswrapper[37036]: I0312 14:49:19.196026 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 14:49:19.256714 master-0 kubenswrapper[37036]: I0312 14:49:19.256672 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 14:49:19.281266 master-0 kubenswrapper[37036]: I0312 14:49:19.280990 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsw8\" (UniqueName: \"kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.281266 master-0 kubenswrapper[37036]: I0312 14:49:19.281062 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovn-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282004 master-0 kubenswrapper[37036]: I0312 14:49:19.281961 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovn-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282078 master-0 kubenswrapper[37036]: I0312 14:49:19.282012 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.282187 master-0 kubenswrapper[37036]: I0312 14:49:19.282152 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-combined-ca-bundle\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282241 master-0 kubenswrapper[37036]: I0312 14:49:19.282200 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovs-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282241 master-0 kubenswrapper[37036]: I0312 14:49:19.282239 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qjrv\" (UniqueName: \"kubernetes.io/projected/e864e00e-7629-4dab-ae9a-55f609712148-kube-api-access-8qjrv\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282354 master-0 kubenswrapper[37036]: I0312 14:49:19.282333 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e864e00e-7629-4dab-ae9a-55f609712148-config\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282409 master-0 kubenswrapper[37036]: I0312 14:49:19.282377 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.282458 master-0 kubenswrapper[37036]: I0312 14:49:19.282414 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.282458 master-0 kubenswrapper[37036]: I0312 14:49:19.282445 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.284682 master-0 kubenswrapper[37036]: I0312 14:49:19.283736 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e864e00e-7629-4dab-ae9a-55f609712148-config\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.284682 master-0 kubenswrapper[37036]: I0312 14:49:19.283817 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e864e00e-7629-4dab-ae9a-55f609712148-ovs-rundir\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.286870 master-0 kubenswrapper[37036]: I0312 14:49:19.286838 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.288707 master-0 kubenswrapper[37036]: I0312 14:49:19.288067 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e864e00e-7629-4dab-ae9a-55f609712148-combined-ca-bundle\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.324767 master-0 kubenswrapper[37036]: I0312 14:49:19.318602 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qjrv\" (UniqueName: \"kubernetes.io/projected/e864e00e-7629-4dab-ae9a-55f609712148-kube-api-access-8qjrv\") pod \"ovn-controller-metrics-rgttc\" (UID: \"e864e00e-7629-4dab-ae9a-55f609712148\") " pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.385014 master-0 kubenswrapper[37036]: I0312 14:49:19.383944 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.385014 master-0 kubenswrapper[37036]: I0312 14:49:19.384111 37036 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.385014 master-0 kubenswrapper[37036]: I0312 14:49:19.384133 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.385014 master-0 kubenswrapper[37036]: I0312 14:49:19.384186 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbsw8\" (UniqueName: \"kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.385357 master-0 kubenswrapper[37036]: I0312 14:49:19.385289 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.387792 master-0 kubenswrapper[37036]: I0312 14:49:19.385844 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.387792 master-0 kubenswrapper[37036]: I0312 14:49:19.386383 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.429198 master-0 kubenswrapper[37036]: I0312 14:49:19.422203 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbsw8\" (UniqueName: \"kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8\") pod \"dnsmasq-dns-5db7b98cb5-49vnz\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.487267 master-0 kubenswrapper[37036]: I0312 14:49:19.487155 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rgttc" Mar 12 14:49:19.517464 master-0 kubenswrapper[37036]: I0312 14:49:19.517384 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:19.641471 master-0 kubenswrapper[37036]: I0312 14:49:19.641345 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:19.683387 master-0 kubenswrapper[37036]: I0312 14:49:19.682784 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"] Mar 12 14:49:19.687064 master-0 kubenswrapper[37036]: I0312 14:49:19.684508 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.688474 master-0 kubenswrapper[37036]: I0312 14:49:19.688262 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 14:49:19.727521 master-0 kubenswrapper[37036]: I0312 14:49:19.726654 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 14:49:19.777523 master-0 kubenswrapper[37036]: I0312 14:49:19.777457 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"] Mar 12 14:49:19.794192 master-0 kubenswrapper[37036]: I0312 14:49:19.794087 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.794417 master-0 kubenswrapper[37036]: I0312 14:49:19.794234 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.794720 master-0 kubenswrapper[37036]: I0312 14:49:19.794691 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.794816 master-0 kubenswrapper[37036]: I0312 14:49:19.794796 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.795432 master-0 kubenswrapper[37036]: I0312 14:49:19.795390 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvbb\" (UniqueName: \"kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.807736 master-0 kubenswrapper[37036]: I0312 14:49:19.807685 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 14:49:19.896834 master-0 kubenswrapper[37036]: I0312 14:49:19.896700 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.896834 master-0 kubenswrapper[37036]: I0312 14:49:19.896766 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.897096 master-0 kubenswrapper[37036]: I0312 14:49:19.896990 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnvbb\" (UniqueName: \"kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" 
(UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.897096 master-0 kubenswrapper[37036]: I0312 14:49:19.897075 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.897180 master-0 kubenswrapper[37036]: I0312 14:49:19.897114 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.898855 master-0 kubenswrapper[37036]: I0312 14:49:19.898165 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.898855 master-0 kubenswrapper[37036]: I0312 14:49:19.898247 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.898855 master-0 kubenswrapper[37036]: I0312 14:49:19.898403 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: 
\"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.898855 master-0 kubenswrapper[37036]: I0312 14:49:19.898485 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:19.916642 master-0 kubenswrapper[37036]: I0312 14:49:19.916294 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnvbb\" (UniqueName: \"kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb\") pod \"dnsmasq-dns-57bc987d9f-b27sg\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") " pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:20.021438 master-0 kubenswrapper[37036]: I0312 14:49:20.021388 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:20.021672 master-0 kubenswrapper[37036]: I0312 14:49:20.021640 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 14:49:20.071746 master-0 kubenswrapper[37036]: I0312 14:49:20.071240 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 14:49:20.078179 master-0 kubenswrapper[37036]: I0312 14:49:20.075093 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 14:49:20.078179 master-0 kubenswrapper[37036]: I0312 14:49:20.075324 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 14:49:20.078179 master-0 kubenswrapper[37036]: I0312 14:49:20.075754 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 14:49:20.101478 master-0 kubenswrapper[37036]: I0312 14:49:20.101418 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 14:49:20.121075 master-0 kubenswrapper[37036]: I0312 14:49:20.120986 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.121674 master-0 kubenswrapper[37036]: I0312 14:49:20.121613 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.121962 master-0 kubenswrapper[37036]: I0312 14:49:20.121941 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-scripts\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.122320 master-0 kubenswrapper[37036]: I0312 
14:49:20.122238 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftd6\" (UniqueName: \"kubernetes.io/projected/22529544-4de1-4a4d-8b41-71e9c3b522e1-kube-api-access-lftd6\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.122606 master-0 kubenswrapper[37036]: I0312 14:49:20.122584 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.123003 master-0 kubenswrapper[37036]: I0312 14:49:20.122953 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-config\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.124119 master-0 kubenswrapper[37036]: I0312 14:49:20.123666 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.148165 master-0 kubenswrapper[37036]: W0312 14:49:20.147938 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode864e00e_7629_4dab_ae9a_55f609712148.slice/crio-4ed6872a16598690f9f321dfd2cdd070d70c914102f3a192800f5e9bdd36f475 WatchSource:0}: Error finding container 4ed6872a16598690f9f321dfd2cdd070d70c914102f3a192800f5e9bdd36f475: Status 404 returned error can't find the container with id 
4ed6872a16598690f9f321dfd2cdd070d70c914102f3a192800f5e9bdd36f475 Mar 12 14:49:20.167535 master-0 kubenswrapper[37036]: I0312 14:49:20.167468 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rgttc"] Mar 12 14:49:20.227818 master-0 kubenswrapper[37036]: I0312 14:49:20.227772 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.228016 master-0 kubenswrapper[37036]: I0312 14:49:20.227993 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-scripts\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.228781 master-0 kubenswrapper[37036]: I0312 14:49:20.228758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftd6\" (UniqueName: \"kubernetes.io/projected/22529544-4de1-4a4d-8b41-71e9c3b522e1-kube-api-access-lftd6\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.228973 master-0 kubenswrapper[37036]: I0312 14:49:20.228949 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.229570 master-0 kubenswrapper[37036]: I0312 14:49:20.229546 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-config\") pod 
\"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.229790 master-0 kubenswrapper[37036]: I0312 14:49:20.229771 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.230074 master-0 kubenswrapper[37036]: I0312 14:49:20.230014 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.230411 master-0 kubenswrapper[37036]: I0312 14:49:20.230375 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-config\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.230480 master-0 kubenswrapper[37036]: I0312 14:49:20.229270 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22529544-4de1-4a4d-8b41-71e9c3b522e1-scripts\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.232443 master-0 kubenswrapper[37036]: I0312 14:49:20.231524 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.249594 master-0 kubenswrapper[37036]: I0312 14:49:20.240710 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.251305 master-0 kubenswrapper[37036]: I0312 14:49:20.251252 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.255782 master-0 kubenswrapper[37036]: I0312 14:49:20.255686 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftd6\" (UniqueName: \"kubernetes.io/projected/22529544-4de1-4a4d-8b41-71e9c3b522e1-kube-api-access-lftd6\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.278102 master-0 kubenswrapper[37036]: I0312 14:49:20.277418 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22529544-4de1-4a4d-8b41-71e9c3b522e1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22529544-4de1-4a4d-8b41-71e9c3b522e1\") " pod="openstack/ovn-northd-0" Mar 12 14:49:20.312195 master-0 kubenswrapper[37036]: I0312 14:49:20.312097 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:20.448052 master-0 kubenswrapper[37036]: I0312 14:49:20.446095 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 14:49:20.448052 master-0 kubenswrapper[37036]: I0312 14:49:20.446593 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 14:49:20.448052 master-0 
kubenswrapper[37036]: I0312 14:49:20.446819 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 14:49:20.448301 master-0 kubenswrapper[37036]: I0312 14:49:20.448200 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 14:49:20.634525 master-0 kubenswrapper[37036]: I0312 14:49:20.634125 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"] Mar 12 14:49:20.738703 master-0 kubenswrapper[37036]: I0312 14:49:20.736736 37036 generic.go:334] "Generic (PLEG): container finished" podID="e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" containerID="dd33eb9b6ec605de71f71a77209c3259f548badabee25ef844d5e7d4b2af554b" exitCode=0 Mar 12 14:49:20.738703 master-0 kubenswrapper[37036]: I0312 14:49:20.736830 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" event={"ID":"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90","Type":"ContainerDied","Data":"dd33eb9b6ec605de71f71a77209c3259f548badabee25ef844d5e7d4b2af554b"} Mar 12 14:49:20.738703 master-0 kubenswrapper[37036]: I0312 14:49:20.736876 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" event={"ID":"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90","Type":"ContainerStarted","Data":"f94565823e9d1afd5624918d1e8fa5617fcbe2ef8b3f118b9b8cd58fac246ebf"} Mar 12 14:49:20.750742 master-0 kubenswrapper[37036]: I0312 14:49:20.750050 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rgttc" event={"ID":"e864e00e-7629-4dab-ae9a-55f609712148","Type":"ContainerStarted","Data":"18b92066ab33d24841af0fb9cec9f2ab86116a703079062b52d4280cda837328"} Mar 12 14:49:20.750742 master-0 kubenswrapper[37036]: I0312 14:49:20.750123 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rgttc" 
event={"ID":"e864e00e-7629-4dab-ae9a-55f609712148","Type":"ContainerStarted","Data":"4ed6872a16598690f9f321dfd2cdd070d70c914102f3a192800f5e9bdd36f475"} Mar 12 14:49:20.757343 master-0 kubenswrapper[37036]: I0312 14:49:20.757209 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" event={"ID":"a3cb6993-990a-4101-a6e1-fd59b95eeeb0","Type":"ContainerStarted","Data":"60e7a6a9da74daf1a0b2056980d23068ea59dd069b421b29d7ec4e8898112dab"} Mar 12 14:49:20.789054 master-0 kubenswrapper[37036]: I0312 14:49:20.788748 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rgttc" podStartSLOduration=1.788721694 podStartE2EDuration="1.788721694s" podCreationTimestamp="2026-03-12 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:20.784310044 +0000 UTC m=+819.792050981" watchObservedRunningTime="2026-03-12 14:49:20.788721694 +0000 UTC m=+819.796462631" Mar 12 14:49:21.018946 master-0 kubenswrapper[37036]: W0312 14:49:21.018531 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22529544_4de1_4a4d_8b41_71e9c3b522e1.slice/crio-1ee06f8d974fb8c0053c9878f5d69bc39346137795fc153657ce9e0a49aba5cd WatchSource:0}: Error finding container 1ee06f8d974fb8c0053c9878f5d69bc39346137795fc153657ce9e0a49aba5cd: Status 404 returned error can't find the container with id 1ee06f8d974fb8c0053c9878f5d69bc39346137795fc153657ce9e0a49aba5cd Mar 12 14:49:21.018946 master-0 kubenswrapper[37036]: I0312 14:49:21.018850 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 14:49:21.247010 master-0 kubenswrapper[37036]: I0312 14:49:21.246075 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:21.377354 master-0 kubenswrapper[37036]: I0312 14:49:21.377284 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb\") pod \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " Mar 12 14:49:21.377563 master-0 kubenswrapper[37036]: I0312 14:49:21.377398 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc\") pod \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " Mar 12 14:49:21.377635 master-0 kubenswrapper[37036]: I0312 14:49:21.377613 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config\") pod \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " Mar 12 14:49:21.377736 master-0 kubenswrapper[37036]: I0312 14:49:21.377716 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbsw8\" (UniqueName: \"kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8\") pod \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\" (UID: \"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90\") " Mar 12 14:49:21.380981 master-0 kubenswrapper[37036]: I0312 14:49:21.380921 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8" (OuterVolumeSpecName: "kube-api-access-lbsw8") pod "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" (UID: "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90"). InnerVolumeSpecName "kube-api-access-lbsw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:21.397790 master-0 kubenswrapper[37036]: I0312 14:49:21.397715 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" (UID: "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:21.400362 master-0 kubenswrapper[37036]: I0312 14:49:21.400322 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config" (OuterVolumeSpecName: "config") pod "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" (UID: "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:21.403489 master-0 kubenswrapper[37036]: I0312 14:49:21.403430 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" (UID: "e4c1e388-ed69-47f6-acc8-c08b1bc0bd90"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:21.481072 master-0 kubenswrapper[37036]: I0312 14:49:21.480929 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:21.481072 master-0 kubenswrapper[37036]: I0312 14:49:21.480972 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:21.481072 master-0 kubenswrapper[37036]: I0312 14:49:21.481033 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:21.481072 master-0 kubenswrapper[37036]: I0312 14:49:21.481048 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbsw8\" (UniqueName: \"kubernetes.io/projected/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90-kube-api-access-lbsw8\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:21.764708 master-0 kubenswrapper[37036]: I0312 14:49:21.764636 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22529544-4de1-4a4d-8b41-71e9c3b522e1","Type":"ContainerStarted","Data":"1ee06f8d974fb8c0053c9878f5d69bc39346137795fc153657ce9e0a49aba5cd"} Mar 12 14:49:21.768353 master-0 kubenswrapper[37036]: I0312 14:49:21.768304 37036 generic.go:334] "Generic (PLEG): container finished" podID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerID="bb86c596a387fea3f7587cfdfa84092f6c0fcc0a421de611d73e99181af48c87" exitCode=0 Mar 12 14:49:21.768495 master-0 kubenswrapper[37036]: I0312 14:49:21.768390 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" 
event={"ID":"a3cb6993-990a-4101-a6e1-fd59b95eeeb0","Type":"ContainerDied","Data":"bb86c596a387fea3f7587cfdfa84092f6c0fcc0a421de611d73e99181af48c87"} Mar 12 14:49:21.773100 master-0 kubenswrapper[37036]: I0312 14:49:21.772136 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" event={"ID":"e4c1e388-ed69-47f6-acc8-c08b1bc0bd90","Type":"ContainerDied","Data":"f94565823e9d1afd5624918d1e8fa5617fcbe2ef8b3f118b9b8cd58fac246ebf"} Mar 12 14:49:21.773100 master-0 kubenswrapper[37036]: I0312 14:49:21.772199 37036 scope.go:117] "RemoveContainer" containerID="dd33eb9b6ec605de71f71a77209c3259f548badabee25ef844d5e7d4b2af554b" Mar 12 14:49:21.773100 master-0 kubenswrapper[37036]: I0312 14:49:21.772323 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db7b98cb5-49vnz" Mar 12 14:49:21.988070 master-0 kubenswrapper[37036]: I0312 14:49:21.988010 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:21.998673 master-0 kubenswrapper[37036]: I0312 14:49:21.998606 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db7b98cb5-49vnz"] Mar 12 14:49:22.777710 master-0 kubenswrapper[37036]: I0312 14:49:22.777664 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 14:49:22.781816 master-0 kubenswrapper[37036]: I0312 14:49:22.778367 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 14:49:22.783879 master-0 kubenswrapper[37036]: I0312 14:49:22.783798 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22529544-4de1-4a4d-8b41-71e9c3b522e1","Type":"ContainerStarted","Data":"63b72a59521c42b677a6e4d581cb28f9221f3c897274766bf0f1f6c0f51a1bfe"} Mar 12 14:49:22.789721 master-0 kubenswrapper[37036]: I0312 14:49:22.789652 
37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" event={"ID":"a3cb6993-990a-4101-a6e1-fd59b95eeeb0","Type":"ContainerStarted","Data":"36f54fe5d8b6593c3b6ff3c587675fcd33516b630e1c9111cdfd6faadca5df72"} Mar 12 14:49:22.790134 master-0 kubenswrapper[37036]: I0312 14:49:22.790113 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" Mar 12 14:49:22.820206 master-0 kubenswrapper[37036]: I0312 14:49:22.820113 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" podStartSLOduration=3.820092575 podStartE2EDuration="3.820092575s" podCreationTimestamp="2026-03-12 14:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:22.80942731 +0000 UTC m=+821.817168257" watchObservedRunningTime="2026-03-12 14:49:22.820092575 +0000 UTC m=+821.827833512" Mar 12 14:49:23.247001 master-0 kubenswrapper[37036]: I0312 14:49:23.246694 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" path="/var/lib/kubelet/pods/e4c1e388-ed69-47f6-acc8-c08b1bc0bd90/volumes" Mar 12 14:49:23.809748 master-0 kubenswrapper[37036]: I0312 14:49:23.809667 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22529544-4de1-4a4d-8b41-71e9c3b522e1","Type":"ContainerStarted","Data":"4ab587294066e28052ac61adabe1999e45ffefe1bb6e25e0bae07c6a50d12c94"} Mar 12 14:49:23.810337 master-0 kubenswrapper[37036]: I0312 14:49:23.810061 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 14:49:23.841425 master-0 kubenswrapper[37036]: I0312 14:49:23.841341 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.402012497 
podStartE2EDuration="4.841321983s" podCreationTimestamp="2026-03-12 14:49:19 +0000 UTC" firstStartedPulling="2026-03-12 14:49:21.020726782 +0000 UTC m=+820.028467719" lastFinishedPulling="2026-03-12 14:49:22.460036268 +0000 UTC m=+821.467777205" observedRunningTime="2026-03-12 14:49:23.832893454 +0000 UTC m=+822.840634411" watchObservedRunningTime="2026-03-12 14:49:23.841321983 +0000 UTC m=+822.849062920" Mar 12 14:49:24.595297 master-0 kubenswrapper[37036]: I0312 14:49:24.595256 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 14:49:24.664707 master-0 kubenswrapper[37036]: I0312 14:49:24.664655 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 14:49:24.862406 master-0 kubenswrapper[37036]: I0312 14:49:24.862279 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 14:49:24.936044 master-0 kubenswrapper[37036]: I0312 14:49:24.935975 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 14:49:26.493039 master-0 kubenswrapper[37036]: I0312 14:49:26.492964 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1d9f-account-create-update-kkfzt"] Mar 12 14:49:26.495798 master-0 kubenswrapper[37036]: E0312 14:49:26.495570 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" containerName="init" Mar 12 14:49:26.496143 master-0 kubenswrapper[37036]: I0312 14:49:26.496124 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" containerName="init" Mar 12 14:49:26.496543 master-0 kubenswrapper[37036]: I0312 14:49:26.496528 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c1e388-ed69-47f6-acc8-c08b1bc0bd90" containerName="init" Mar 12 14:49:26.497445 master-0 
kubenswrapper[37036]: I0312 14:49:26.497425 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.500519 master-0 kubenswrapper[37036]: I0312 14:49:26.500454 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 14:49:26.514355 master-0 kubenswrapper[37036]: I0312 14:49:26.514120 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1d9f-account-create-update-kkfzt"] Mar 12 14:49:26.603368 master-0 kubenswrapper[37036]: I0312 14:49:26.603276 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cqt24"] Mar 12 14:49:26.604839 master-0 kubenswrapper[37036]: I0312 14:49:26.604806 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:26.619232 master-0 kubenswrapper[37036]: I0312 14:49:26.619143 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n86rp\" (UniqueName: \"kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.619967 master-0 kubenswrapper[37036]: I0312 14:49:26.619940 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.621611 master-0 kubenswrapper[37036]: I0312 14:49:26.621564 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cqt24"] 
Mar 12 14:49:26.673018 master-0 kubenswrapper[37036]: I0312 14:49:26.665316 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cjlkf"] Mar 12 14:49:26.673018 master-0 kubenswrapper[37036]: I0312 14:49:26.668514 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjlkf" Mar 12 14:49:26.674769 master-0 kubenswrapper[37036]: I0312 14:49:26.674218 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjlkf"] Mar 12 14:49:26.723455 master-0 kubenswrapper[37036]: I0312 14:49:26.722766 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.723455 master-0 kubenswrapper[37036]: I0312 14:49:26.723036 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n86rp\" (UniqueName: \"kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.723455 master-0 kubenswrapper[37036]: I0312 14:49:26.723218 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts\") pod \"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:26.723455 master-0 kubenswrapper[37036]: I0312 14:49:26.723296 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-llvfm\" (UniqueName: \"kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm\") pod \"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:26.724270 master-0 kubenswrapper[37036]: I0312 14:49:26.724201 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.744799 master-0 kubenswrapper[37036]: I0312 14:49:26.743824 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n86rp\" (UniqueName: \"kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp\") pod \"keystone-1d9f-account-create-update-kkfzt\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:26.784663 master-0 kubenswrapper[37036]: I0312 14:49:26.781692 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4fdf-account-create-update-bvvpt"] Mar 12 14:49:26.784663 master-0 kubenswrapper[37036]: I0312 14:49:26.783343 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4fdf-account-create-update-bvvpt" Mar 12 14:49:26.793963 master-0 kubenswrapper[37036]: I0312 14:49:26.792999 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4fdf-account-create-update-bvvpt"] Mar 12 14:49:26.794169 master-0 kubenswrapper[37036]: I0312 14:49:26.794100 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 14:49:26.828358 master-0 kubenswrapper[37036]: I0312 14:49:26.824587 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf" Mar 12 14:49:26.828358 master-0 kubenswrapper[37036]: I0312 14:49:26.824699 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts\") pod \"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:26.828358 master-0 kubenswrapper[37036]: I0312 14:49:26.824733 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzsq\" (UniqueName: \"kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf" Mar 12 14:49:26.828358 master-0 kubenswrapper[37036]: I0312 14:49:26.824766 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvfm\" (UniqueName: \"kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm\") pod 
\"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24"
Mar 12 14:49:26.828358 master-0 kubenswrapper[37036]: I0312 14:49:26.828185 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts\") pod \"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24"
Mar 12 14:49:26.849105 master-0 kubenswrapper[37036]: I0312 14:49:26.849047 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llvfm\" (UniqueName: \"kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm\") pod \"keystone-db-create-cqt24\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " pod="openstack/keystone-db-create-cqt24"
Mar 12 14:49:26.862188 master-0 kubenswrapper[37036]: I0312 14:49:26.862139 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1d9f-account-create-update-kkfzt"
Mar 12 14:49:26.925102 master-0 kubenswrapper[37036]: I0312 14:49:26.924952 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cqt24"
Mar 12 14:49:26.928104 master-0 kubenswrapper[37036]: I0312 14:49:26.927320 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lrmz\" (UniqueName: \"kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:26.928104 master-0 kubenswrapper[37036]: I0312 14:49:26.927439 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:26.928104 master-0 kubenswrapper[37036]: I0312 14:49:26.927497 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf"
Mar 12 14:49:26.928104 master-0 kubenswrapper[37036]: I0312 14:49:26.927568 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzsq\" (UniqueName: \"kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf"
Mar 12 14:49:26.929161 master-0 kubenswrapper[37036]: I0312 14:49:26.929087 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf"
Mar 12 14:49:26.952749 master-0 kubenswrapper[37036]: I0312 14:49:26.952074 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzsq\" (UniqueName: \"kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq\") pod \"placement-db-create-cjlkf\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " pod="openstack/placement-db-create-cjlkf"
Mar 12 14:49:26.995862 master-0 kubenswrapper[37036]: I0312 14:49:26.995715 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjlkf"
Mar 12 14:49:27.030220 master-0 kubenswrapper[37036]: I0312 14:49:27.030140 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:27.030497 master-0 kubenswrapper[37036]: I0312 14:49:27.030451 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lrmz\" (UniqueName: \"kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:27.032302 master-0 kubenswrapper[37036]: I0312 14:49:27.031780 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:27.056930 master-0 kubenswrapper[37036]: I0312 14:49:27.056272 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lrmz\" (UniqueName: \"kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz\") pod \"placement-4fdf-account-create-update-bvvpt\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:27.125494 master-0 kubenswrapper[37036]: I0312 14:49:27.125436 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4fdf-account-create-update-bvvpt"
Mar 12 14:49:27.535334 master-0 kubenswrapper[37036]: W0312 14:49:27.535117 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68421a5c_f523_46fc_8448_704811e6ed1c.slice/crio-c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba WatchSource:0}: Error finding container c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba: Status 404 returned error can't find the container with id c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba
Mar 12 14:49:27.546651 master-0 kubenswrapper[37036]: I0312 14:49:27.543669 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1d9f-account-create-update-kkfzt"]
Mar 12 14:49:27.558728 master-0 kubenswrapper[37036]: I0312 14:49:27.558673 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cqt24"]
Mar 12 14:49:27.645326 master-0 kubenswrapper[37036]: I0312 14:49:27.644245 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjlkf"]
Mar 12 14:49:27.781456 master-0 kubenswrapper[37036]: I0312 14:49:27.755946 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"]
Mar 12 14:49:27.781456 master-0 kubenswrapper[37036]: I0312 14:49:27.756445 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="dnsmasq-dns" containerID="cri-o://36f54fe5d8b6593c3b6ff3c587675fcd33516b630e1c9111cdfd6faadca5df72" gracePeriod=10
Mar 12 14:49:27.781456 master-0 kubenswrapper[37036]: I0312 14:49:27.774414 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg"
Mar 12 14:49:27.817497 master-0 kubenswrapper[37036]: I0312 14:49:27.814962 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"]
Mar 12 14:49:27.817497 master-0 kubenswrapper[37036]: I0312 14:49:27.816745 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:27.822547 master-0 kubenswrapper[37036]: I0312 14:49:27.819877 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"]
Mar 12 14:49:27.907704 master-0 kubenswrapper[37036]: I0312 14:49:27.906975 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4fdf-account-create-update-bvvpt"]
Mar 12 14:49:27.938392 master-0 kubenswrapper[37036]: I0312 14:49:27.920652 37036 generic.go:334] "Generic (PLEG): container finished" podID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerID="36f54fe5d8b6593c3b6ff3c587675fcd33516b630e1c9111cdfd6faadca5df72" exitCode=0
Mar 12 14:49:27.938392 master-0 kubenswrapper[37036]: I0312 14:49:27.920747 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" event={"ID":"a3cb6993-990a-4101-a6e1-fd59b95eeeb0","Type":"ContainerDied","Data":"36f54fe5d8b6593c3b6ff3c587675fcd33516b630e1c9111cdfd6faadca5df72"}
Mar 12 14:49:27.938392 master-0 kubenswrapper[37036]: I0312 14:49:27.936510 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cqt24" event={"ID":"a9644543-4e13-4b3c-9862-7a861ea2af30","Type":"ContainerStarted","Data":"f27b41fa3afcd8393d12939f98b08f4f10fb8735a1eb583acbd083101205ab56"}
Mar 12 14:49:27.958321 master-0 kubenswrapper[37036]: I0312 14:49:27.944831 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d9f-account-create-update-kkfzt" event={"ID":"68421a5c-f523-46fc-8448-704811e6ed1c","Type":"ContainerStarted","Data":"c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba"}
Mar 12 14:49:27.958321 master-0 kubenswrapper[37036]: I0312 14:49:27.954881 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlkf" event={"ID":"6defdb3a-1932-4d90-b25c-af496585b703","Type":"ContainerStarted","Data":"ca180af688bae9967ff84de69b31e8520fec26b840e4f25a18cfbce8f9678675"}
Mar 12 14:49:27.967293 master-0 kubenswrapper[37036]: W0312 14:49:27.967250 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e21e790_37ba_458a_a7a6_c17ed7736b11.slice/crio-e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8 WatchSource:0}: Error finding container e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8: Status 404 returned error can't find the container with id e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8
Mar 12 14:49:28.008865 master-0 kubenswrapper[37036]: I0312 14:49:28.008805 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.008865 master-0 kubenswrapper[37036]: I0312 14:49:28.008861 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.009105 master-0 kubenswrapper[37036]: I0312 14:49:28.008927 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.009105 master-0 kubenswrapper[37036]: I0312 14:49:28.009042 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqfq\" (UniqueName: \"kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.009187 master-0 kubenswrapper[37036]: I0312 14:49:28.009151 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.116231 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.116288 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.116341 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.116415 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqfq\" (UniqueName: \"kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.116484 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.117547 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.118322 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.118611 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.144760 master-0 kubenswrapper[37036]: I0312 14:49:28.139666 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.149809 master-0 kubenswrapper[37036]: I0312 14:49:28.149760 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqfq\" (UniqueName: \"kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq\") pod \"dnsmasq-dns-5b8649b7f9-97zvq\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.340528 master-0 kubenswrapper[37036]: I0312 14:49:28.339864 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq"
Mar 12 14:49:28.469503 master-0 kubenswrapper[37036]: I0312 14:49:28.469455 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg"
Mar 12 14:49:28.545975 master-0 kubenswrapper[37036]: I0312 14:49:28.545574 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb\") pod \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") "
Mar 12 14:49:28.545975 master-0 kubenswrapper[37036]: I0312 14:49:28.545664 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb\") pod \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") "
Mar 12 14:49:28.545975 master-0 kubenswrapper[37036]: I0312 14:49:28.545808 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc\") pod \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") "
Mar 12 14:49:28.545975 master-0 kubenswrapper[37036]: I0312 14:49:28.545912 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config\") pod \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") "
Mar 12 14:49:28.545975 master-0 kubenswrapper[37036]: I0312 14:49:28.545963 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnvbb\" (UniqueName: \"kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb\") pod \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\" (UID: \"a3cb6993-990a-4101-a6e1-fd59b95eeeb0\") "
Mar 12 14:49:28.626578 master-0 kubenswrapper[37036]: I0312 14:49:28.626400 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb" (OuterVolumeSpecName: "kube-api-access-tnvbb") pod "a3cb6993-990a-4101-a6e1-fd59b95eeeb0" (UID: "a3cb6993-990a-4101-a6e1-fd59b95eeeb0"). InnerVolumeSpecName "kube-api-access-tnvbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:49:28.654327 master-0 kubenswrapper[37036]: I0312 14:49:28.654271 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnvbb\" (UniqueName: \"kubernetes.io/projected/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-kube-api-access-tnvbb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:49:28.732197 master-0 kubenswrapper[37036]: I0312 14:49:28.732129 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config" (OuterVolumeSpecName: "config") pod "a3cb6993-990a-4101-a6e1-fd59b95eeeb0" (UID: "a3cb6993-990a-4101-a6e1-fd59b95eeeb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:49:28.768073 master-0 kubenswrapper[37036]: I0312 14:49:28.760312 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:49:28.800339 master-0 kubenswrapper[37036]: I0312 14:49:28.799509 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3cb6993-990a-4101-a6e1-fd59b95eeeb0" (UID: "a3cb6993-990a-4101-a6e1-fd59b95eeeb0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:49:28.864801 master-0 kubenswrapper[37036]: I0312 14:49:28.864149 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 14:49:28.915785 master-0 kubenswrapper[37036]: I0312 14:49:28.915663 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3cb6993-990a-4101-a6e1-fd59b95eeeb0" (UID: "a3cb6993-990a-4101-a6e1-fd59b95eeeb0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:49:28.921618 master-0 kubenswrapper[37036]: I0312 14:49:28.921549 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3cb6993-990a-4101-a6e1-fd59b95eeeb0" (UID: "a3cb6993-990a-4101-a6e1-fd59b95eeeb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:49:28.967652 master-0 kubenswrapper[37036]: I0312 14:49:28.967550 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:49:28.967652 master-0 kubenswrapper[37036]: I0312 14:49:28.967588 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3cb6993-990a-4101-a6e1-fd59b95eeeb0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:49:28.999741 master-0 kubenswrapper[37036]: I0312 14:49:28.999626 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4fdf-account-create-update-bvvpt" event={"ID":"9e21e790-37ba-458a-a7a6-c17ed7736b11","Type":"ContainerStarted","Data":"ab54e1ad2d765065a5735985c93cd0b4de704401910701c8b732cbde88a41722"}
Mar 12 14:49:28.999934 master-0 kubenswrapper[37036]: I0312 14:49:28.999741 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4fdf-account-create-update-bvvpt" event={"ID":"9e21e790-37ba-458a-a7a6-c17ed7736b11","Type":"ContainerStarted","Data":"e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8"}
Mar 12 14:49:29.011848 master-0 kubenswrapper[37036]: I0312 14:49:29.011777 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"]
Mar 12 14:49:29.013304 master-0 kubenswrapper[37036]: W0312 14:49:29.013227 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3095c7fb_7e2c_4a5f_a5e5_0a1876168cd9.slice/crio-04911b8fc1f8573d5632919377c3fa38b18e16640d73857141cf5c58e965df4d WatchSource:0}: Error finding container 04911b8fc1f8573d5632919377c3fa38b18e16640d73857141cf5c58e965df4d: Status 404 returned error can't find the container with id 04911b8fc1f8573d5632919377c3fa38b18e16640d73857141cf5c58e965df4d
Mar 12 14:49:29.013572 master-0 kubenswrapper[37036]: I0312 14:49:29.013532 37036 generic.go:334] "Generic (PLEG): container finished" podID="6defdb3a-1932-4d90-b25c-af496585b703" containerID="b1525000a6649203939c6f326608d41c284349f422cb219df34f7c9835eba528" exitCode=0
Mar 12 14:49:29.013780 master-0 kubenswrapper[37036]: I0312 14:49:29.013753 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlkf" event={"ID":"6defdb3a-1932-4d90-b25c-af496585b703","Type":"ContainerDied","Data":"b1525000a6649203939c6f326608d41c284349f422cb219df34f7c9835eba528"}
Mar 12 14:49:29.031644 master-0 kubenswrapper[37036]: I0312 14:49:29.031532 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4fdf-account-create-update-bvvpt" podStartSLOduration=3.03151068 podStartE2EDuration="3.03151068s" podCreationTimestamp="2026-03-12 14:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:29.022001474 +0000 UTC m=+828.029742411" watchObservedRunningTime="2026-03-12 14:49:29.03151068 +0000 UTC m=+828.039251617"
Mar 12 14:49:29.038144 master-0 kubenswrapper[37036]: I0312 14:49:29.038077 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg"
Mar 12 14:49:29.038435 master-0 kubenswrapper[37036]: I0312 14:49:29.038363 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc987d9f-b27sg" event={"ID":"a3cb6993-990a-4101-a6e1-fd59b95eeeb0","Type":"ContainerDied","Data":"60e7a6a9da74daf1a0b2056980d23068ea59dd069b421b29d7ec4e8898112dab"}
Mar 12 14:49:29.038509 master-0 kubenswrapper[37036]: I0312 14:49:29.038484 37036 scope.go:117] "RemoveContainer" containerID="36f54fe5d8b6593c3b6ff3c587675fcd33516b630e1c9111cdfd6faadca5df72"
Mar 12 14:49:29.043826 master-0 kubenswrapper[37036]: I0312 14:49:29.043785 37036 generic.go:334] "Generic (PLEG): container finished" podID="a9644543-4e13-4b3c-9862-7a861ea2af30" containerID="7793d64719cc08332cb5fbb7a9da1e289d9ac896636f1e86af3d63f746705ce6" exitCode=0
Mar 12 14:49:29.043917 master-0 kubenswrapper[37036]: I0312 14:49:29.043855 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cqt24" event={"ID":"a9644543-4e13-4b3c-9862-7a861ea2af30","Type":"ContainerDied","Data":"7793d64719cc08332cb5fbb7a9da1e289d9ac896636f1e86af3d63f746705ce6"}
Mar 12 14:49:29.046415 master-0 kubenswrapper[37036]: I0312 14:49:29.046360 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d9f-account-create-update-kkfzt" event={"ID":"68421a5c-f523-46fc-8448-704811e6ed1c","Type":"ContainerStarted","Data":"5d65c38fca88dd728ccf5f5db9fdeecb5ff467dbb534e3b4adbeb426ac98d726"}
Mar 12 14:49:29.061568 master-0 kubenswrapper[37036]: I0312 14:49:29.061500 37036 scope.go:117] "RemoveContainer" containerID="bb86c596a387fea3f7587cfdfa84092f6c0fcc0a421de611d73e99181af48c87"
Mar 12 14:49:29.085975 master-0 kubenswrapper[37036]: I0312 14:49:29.085826 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-1d9f-account-create-update-kkfzt" podStartSLOduration=3.085807227 podStartE2EDuration="3.085807227s" podCreationTimestamp="2026-03-12 14:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:29.073215625 +0000 UTC m=+828.080956562" watchObservedRunningTime="2026-03-12 14:49:29.085807227 +0000 UTC m=+828.093548154"
Mar 12 14:49:29.147863 master-0 kubenswrapper[37036]: I0312 14:49:29.147798 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"]
Mar 12 14:49:29.157926 master-0 kubenswrapper[37036]: I0312 14:49:29.157831 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bc987d9f-b27sg"]
Mar 12 14:49:29.245731 master-0 kubenswrapper[37036]: I0312 14:49:29.245678 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" path="/var/lib/kubelet/pods/a3cb6993-990a-4101-a6e1-fd59b95eeeb0/volumes"
Mar 12 14:49:29.814913 master-0 kubenswrapper[37036]: I0312 14:49:29.814828 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Mar 12 14:49:29.817547 master-0 kubenswrapper[37036]: E0312 14:49:29.815864 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="dnsmasq-dns"
Mar 12 14:49:29.817547 master-0 kubenswrapper[37036]: I0312 14:49:29.815888 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="dnsmasq-dns"
Mar 12 14:49:29.817547 master-0 kubenswrapper[37036]: E0312 14:49:29.815935 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="init"
Mar 12 14:49:29.817547 master-0 kubenswrapper[37036]: I0312 14:49:29.815948 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="init"
Mar 12 14:49:29.817547 master-0 kubenswrapper[37036]: I0312 14:49:29.816309 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cb6993-990a-4101-a6e1-fd59b95eeeb0" containerName="dnsmasq-dns"
Mar 12 14:49:29.824123 master-0 kubenswrapper[37036]: I0312 14:49:29.824069 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 12 14:49:29.827933 master-0 kubenswrapper[37036]: I0312 14:49:29.827285 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Mar 12 14:49:29.827933 master-0 kubenswrapper[37036]: I0312 14:49:29.827622 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Mar 12 14:49:29.827933 master-0 kubenswrapper[37036]: I0312 14:49:29.827837 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Mar 12 14:49:29.839097 master-0 kubenswrapper[37036]: I0312 14:49:29.838993 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 12 14:49:29.890730 master-0 kubenswrapper[37036]: I0312 14:49:29.890645 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-lock\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.890967 master-0 kubenswrapper[37036]: I0312 14:49:29.890755 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.890967 master-0 kubenswrapper[37036]: I0312 14:49:29.890785 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f03ff05b-df65-4f46-92cc-788da9879240\" (UniqueName: \"kubernetes.io/csi/topolvm.io^41eee21a-5223-4889-9748-f540df5ce959\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.890967 master-0 kubenswrapper[37036]: I0312 14:49:29.890813 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6z9x\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-kube-api-access-p6z9x\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.890967 master-0 kubenswrapper[37036]: I0312 14:49:29.890886 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-cache\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.891146 master-0 kubenswrapper[37036]: I0312 14:49:29.891047 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.992891 master-0 kubenswrapper[37036]: I0312 14:49:29.992823 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.993212 master-0 kubenswrapper[37036]: E0312 14:49:29.993133 37036 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 12 14:49:29.993212 master-0 kubenswrapper[37036]: E0312 14:49:29.993178 37036 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 12 14:49:29.993314 master-0 kubenswrapper[37036]: E0312 14:49:29.993233 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift podName:02b0bb9f-56cd-4ffe-9e37-2200e4baec09 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:30.49321459 +0000 UTC m=+829.500955527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift") pod "swift-storage-0" (UID: "02b0bb9f-56cd-4ffe-9e37-2200e4baec09") : configmap "swift-ring-files" not found
Mar 12 14:49:29.993314 master-0 kubenswrapper[37036]: I0312 14:49:29.993141 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f03ff05b-df65-4f46-92cc-788da9879240\" (UniqueName: \"kubernetes.io/csi/topolvm.io^41eee21a-5223-4889-9748-f540df5ce959\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.993409 master-0 kubenswrapper[37036]: I0312 14:49:29.993320 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6z9x\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-kube-api-access-p6z9x\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.993451 master-0 kubenswrapper[37036]: I0312 14:49:29.993405 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-cache\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.993623 master-0 kubenswrapper[37036]: I0312 14:49:29.993568 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.993695 master-0 kubenswrapper[37036]: I0312 14:49:29.993641 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-lock\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.994148 master-0 kubenswrapper[37036]: I0312 14:49:29.994106 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-cache\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.994227 master-0 kubenswrapper[37036]: I0312 14:49:29.994158 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-lock\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.997003 master-0 kubenswrapper[37036]: I0312 14:49:29.996887 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 14:49:29.997003 master-0 kubenswrapper[37036]: I0312 14:49:29.996949 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:29.997003 master-0 kubenswrapper[37036]: I0312 14:49:29.996965 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f03ff05b-df65-4f46-92cc-788da9879240\" (UniqueName: \"kubernetes.io/csi/topolvm.io^41eee21a-5223-4889-9748-f540df5ce959\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d7c1e88bbeb752125dba5acc450c5931f57ec1e1d93c5f1b1a646fb8f30fde07/globalmount\"" pod="openstack/swift-storage-0"
Mar 12 14:49:30.022442 master-0 kubenswrapper[37036]: I0312 14:49:30.022374 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6z9x\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-kube-api-access-p6z9x\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0"
Mar 12 14:49:30.057373 master-0 kubenswrapper[37036]: I0312 14:49:30.057213 37036 generic.go:334] "Generic (PLEG): container finished" podID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerID="da5ff2b7e53cf6d1ea16e443dc92e4e165ca3226c259a471cc05aad87cea66f3" exitCode=0
Mar 12 14:49:30.057373 master-0 kubenswrapper[37036]: I0312 14:49:30.057306 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" event={"ID":"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9","Type":"ContainerDied","Data":"da5ff2b7e53cf6d1ea16e443dc92e4e165ca3226c259a471cc05aad87cea66f3"}
Mar 12 14:49:30.057373 master-0 kubenswrapper[37036]: I0312 14:49:30.057368 37036 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" event={"ID":"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9","Type":"ContainerStarted","Data":"04911b8fc1f8573d5632919377c3fa38b18e16640d73857141cf5c58e965df4d"} Mar 12 14:49:30.060724 master-0 kubenswrapper[37036]: I0312 14:49:30.060319 37036 generic.go:334] "Generic (PLEG): container finished" podID="68421a5c-f523-46fc-8448-704811e6ed1c" containerID="5d65c38fca88dd728ccf5f5db9fdeecb5ff467dbb534e3b4adbeb426ac98d726" exitCode=0 Mar 12 14:49:30.060724 master-0 kubenswrapper[37036]: I0312 14:49:30.060445 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d9f-account-create-update-kkfzt" event={"ID":"68421a5c-f523-46fc-8448-704811e6ed1c","Type":"ContainerDied","Data":"5d65c38fca88dd728ccf5f5db9fdeecb5ff467dbb534e3b4adbeb426ac98d726"} Mar 12 14:49:30.062029 master-0 kubenswrapper[37036]: I0312 14:49:30.061947 37036 generic.go:334] "Generic (PLEG): container finished" podID="9e21e790-37ba-458a-a7a6-c17ed7736b11" containerID="ab54e1ad2d765065a5735985c93cd0b4de704401910701c8b732cbde88a41722" exitCode=0 Mar 12 14:49:30.062029 master-0 kubenswrapper[37036]: I0312 14:49:30.061991 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4fdf-account-create-update-bvvpt" event={"ID":"9e21e790-37ba-458a-a7a6-c17ed7736b11","Type":"ContainerDied","Data":"ab54e1ad2d765065a5735985c93cd0b4de704401910701c8b732cbde88a41722"} Mar 12 14:49:30.509086 master-0 kubenswrapper[37036]: I0312 14:49:30.508343 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:30.509086 master-0 kubenswrapper[37036]: E0312 14:49:30.508507 37036 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not 
found Mar 12 14:49:30.509086 master-0 kubenswrapper[37036]: E0312 14:49:30.508661 37036 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 14:49:30.509086 master-0 kubenswrapper[37036]: E0312 14:49:30.508713 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift podName:02b0bb9f-56cd-4ffe-9e37-2200e4baec09 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:31.508694376 +0000 UTC m=+830.516435323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift") pod "swift-storage-0" (UID: "02b0bb9f-56cd-4ffe-9e37-2200e4baec09") : configmap "swift-ring-files" not found Mar 12 14:49:30.663263 master-0 kubenswrapper[37036]: I0312 14:49:30.663096 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kqst6"] Mar 12 14:49:30.669982 master-0 kubenswrapper[37036]: I0312 14:49:30.665420 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.687080 master-0 kubenswrapper[37036]: I0312 14:49:30.685022 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kqst6"] Mar 12 14:49:30.700049 master-0 kubenswrapper[37036]: I0312 14:49:30.699300 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qpqr7"] Mar 12 14:49:30.700873 master-0 kubenswrapper[37036]: I0312 14:49:30.700850 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.704015 master-0 kubenswrapper[37036]: I0312 14:49:30.703980 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 14:49:30.707465 master-0 kubenswrapper[37036]: I0312 14:49:30.707411 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 14:49:30.707465 master-0 kubenswrapper[37036]: I0312 14:49:30.707425 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 14:49:30.708406 master-0 kubenswrapper[37036]: I0312 14:49:30.708367 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:30.741781 master-0 kubenswrapper[37036]: I0312 14:49:30.741723 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qpqr7"] Mar 12 14:49:30.822801 master-0 kubenswrapper[37036]: I0312 14:49:30.822736 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts\") pod \"a9644543-4e13-4b3c-9862-7a861ea2af30\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " Mar 12 14:49:30.823447 master-0 kubenswrapper[37036]: I0312 14:49:30.823066 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llvfm\" (UniqueName: \"kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm\") pod \"a9644543-4e13-4b3c-9862-7a861ea2af30\" (UID: \"a9644543-4e13-4b3c-9862-7a861ea2af30\") " Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.823663 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "a9644543-4e13-4b3c-9862-7a861ea2af30" (UID: "a9644543-4e13-4b3c-9862-7a861ea2af30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.823840 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.823921 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.823971 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824060 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824104 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824142 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwbj\" (UniqueName: \"kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824179 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h8mt\" (UniqueName: \"kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824202 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.824544 master-0 kubenswrapper[37036]: I0312 14:49:30.824268 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.825317 
master-0 kubenswrapper[37036]: I0312 14:49:30.824889 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9644543-4e13-4b3c-9862-7a861ea2af30-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:30.827211 master-0 kubenswrapper[37036]: I0312 14:49:30.827160 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm" (OuterVolumeSpecName: "kube-api-access-llvfm") pod "a9644543-4e13-4b3c-9862-7a861ea2af30" (UID: "a9644543-4e13-4b3c-9862-7a861ea2af30"). InnerVolumeSpecName "kube-api-access-llvfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:30.839970 master-0 kubenswrapper[37036]: I0312 14:49:30.839151 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9848-account-create-update-p47pv"] Mar 12 14:49:30.839970 master-0 kubenswrapper[37036]: E0312 14:49:30.839740 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9644543-4e13-4b3c-9862-7a861ea2af30" containerName="mariadb-database-create" Mar 12 14:49:30.839970 master-0 kubenswrapper[37036]: I0312 14:49:30.839759 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9644543-4e13-4b3c-9862-7a861ea2af30" containerName="mariadb-database-create" Mar 12 14:49:30.840239 master-0 kubenswrapper[37036]: I0312 14:49:30.840082 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9644543-4e13-4b3c-9862-7a861ea2af30" containerName="mariadb-database-create" Mar 12 14:49:30.841964 master-0 kubenswrapper[37036]: I0312 14:49:30.841095 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:30.869452 master-0 kubenswrapper[37036]: I0312 14:49:30.869394 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 12 14:49:30.879800 master-0 kubenswrapper[37036]: I0312 14:49:30.879748 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9848-account-create-update-p47pv"] Mar 12 14:49:30.927611 master-0 kubenswrapper[37036]: I0312 14:49:30.927483 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.927857 master-0 kubenswrapper[37036]: I0312 14:49:30.927674 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.927857 master-0 kubenswrapper[37036]: I0312 14:49:30.927708 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.927857 master-0 kubenswrapper[37036]: I0312 14:49:30.927748 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " 
pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.927857 master-0 kubenswrapper[37036]: I0312 14:49:30.927781 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2sl\" (UniqueName: \"kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:30.927857 master-0 kubenswrapper[37036]: I0312 14:49:30.927804 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.928694 master-0 kubenswrapper[37036]: I0312 14:49:30.928636 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwbj\" (UniqueName: \"kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.928793 master-0 kubenswrapper[37036]: I0312 14:49:30.928713 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h8mt\" (UniqueName: \"kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.928793 master-0 kubenswrapper[37036]: I0312 14:49:30.928756 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf\") pod 
\"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.928879 master-0 kubenswrapper[37036]: I0312 14:49:30.928814 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.929482 master-0 kubenswrapper[37036]: I0312 14:49:30.929446 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.929570 master-0 kubenswrapper[37036]: I0312 14:49:30.929527 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.929624 master-0 kubenswrapper[37036]: I0312 14:49:30.929598 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.929804 master-0 kubenswrapper[37036]: I0312 14:49:30.929778 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift\") pod \"swift-ring-rebalance-qpqr7\" (UID: 
\"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.955939 master-0 kubenswrapper[37036]: I0312 14:49:30.952795 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwbj\" (UniqueName: \"kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj\") pod \"glance-db-create-kqst6\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " pod="openstack/glance-db-create-kqst6" Mar 12 14:49:30.962645 master-0 kubenswrapper[37036]: I0312 14:49:30.962589 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:30.962852 master-0 kubenswrapper[37036]: I0312 14:49:30.962804 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.962942 master-0 kubenswrapper[37036]: I0312 14:49:30.962861 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.962996 master-0 kubenswrapper[37036]: I0312 14:49:30.962939 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llvfm\" (UniqueName: \"kubernetes.io/projected/a9644543-4e13-4b3c-9862-7a861ea2af30-kube-api-access-llvfm\") on node \"master-0\" 
DevicePath \"\"" Mar 12 14:49:30.963632 master-0 kubenswrapper[37036]: I0312 14:49:30.963581 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h8mt\" (UniqueName: \"kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.968978 master-0 kubenswrapper[37036]: I0312 14:49:30.968936 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf\") pod \"swift-ring-rebalance-qpqr7\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:30.991059 master-0 kubenswrapper[37036]: I0312 14:49:30.991010 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjlkf" Mar 12 14:49:30.996716 master-0 kubenswrapper[37036]: I0312 14:49:30.996667 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kqst6" Mar 12 14:49:31.043836 master-0 kubenswrapper[37036]: I0312 14:49:31.043766 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:31.067249 master-0 kubenswrapper[37036]: I0312 14:49:31.067201 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts\") pod \"6defdb3a-1932-4d90-b25c-af496585b703\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " Mar 12 14:49:31.067515 master-0 kubenswrapper[37036]: I0312 14:49:31.067270 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzsq\" (UniqueName: \"kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq\") pod \"6defdb3a-1932-4d90-b25c-af496585b703\" (UID: \"6defdb3a-1932-4d90-b25c-af496585b703\") " Mar 12 14:49:31.068490 master-0 kubenswrapper[37036]: I0312 14:49:31.067574 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:31.068490 master-0 kubenswrapper[37036]: I0312 14:49:31.067738 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc2sl\" (UniqueName: \"kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:31.068963 master-0 kubenswrapper[37036]: I0312 14:49:31.067956 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"6defdb3a-1932-4d90-b25c-af496585b703" (UID: "6defdb3a-1932-4d90-b25c-af496585b703"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:31.069635 master-0 kubenswrapper[37036]: I0312 14:49:31.069578 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:31.072931 master-0 kubenswrapper[37036]: I0312 14:49:31.072859 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq" (OuterVolumeSpecName: "kube-api-access-hwzsq") pod "6defdb3a-1932-4d90-b25c-af496585b703" (UID: "6defdb3a-1932-4d90-b25c-af496585b703"). InnerVolumeSpecName "kube-api-access-hwzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:31.086792 master-0 kubenswrapper[37036]: I0312 14:49:31.086721 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc2sl\" (UniqueName: \"kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl\") pod \"glance-9848-account-create-update-p47pv\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:31.092928 master-0 kubenswrapper[37036]: I0312 14:49:31.092319 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-cqt24" Mar 12 14:49:31.092928 master-0 kubenswrapper[37036]: I0312 14:49:31.092314 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cqt24" event={"ID":"a9644543-4e13-4b3c-9862-7a861ea2af30","Type":"ContainerDied","Data":"f27b41fa3afcd8393d12939f98b08f4f10fb8735a1eb583acbd083101205ab56"} Mar 12 14:49:31.092928 master-0 kubenswrapper[37036]: I0312 14:49:31.092487 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27b41fa3afcd8393d12939f98b08f4f10fb8735a1eb583acbd083101205ab56" Mar 12 14:49:31.113621 master-0 kubenswrapper[37036]: I0312 14:49:31.113587 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjlkf" Mar 12 14:49:31.113740 master-0 kubenswrapper[37036]: I0312 14:49:31.113700 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjlkf" event={"ID":"6defdb3a-1932-4d90-b25c-af496585b703","Type":"ContainerDied","Data":"ca180af688bae9967ff84de69b31e8520fec26b840e4f25a18cfbce8f9678675"} Mar 12 14:49:31.113793 master-0 kubenswrapper[37036]: I0312 14:49:31.113776 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca180af688bae9967ff84de69b31e8520fec26b840e4f25a18cfbce8f9678675" Mar 12 14:49:31.121295 master-0 kubenswrapper[37036]: I0312 14:49:31.121187 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" event={"ID":"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9","Type":"ContainerStarted","Data":"a1c57e3759f1afb00b6229f6176686f107112f53db8e4c19c06fb14a9140a225"} Mar 12 14:49:31.121647 master-0 kubenswrapper[37036]: I0312 14:49:31.121610 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" Mar 12 14:49:31.156351 master-0 kubenswrapper[37036]: I0312 14:49:31.156264 37036 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" podStartSLOduration=4.156239788 podStartE2EDuration="4.156239788s" podCreationTimestamp="2026-03-12 14:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:31.14905172 +0000 UTC m=+830.156792657" watchObservedRunningTime="2026-03-12 14:49:31.156239788 +0000 UTC m=+830.163980725" Mar 12 14:49:31.174841 master-0 kubenswrapper[37036]: I0312 14:49:31.171546 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzsq\" (UniqueName: \"kubernetes.io/projected/6defdb3a-1932-4d90-b25c-af496585b703-kube-api-access-hwzsq\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:31.174841 master-0 kubenswrapper[37036]: I0312 14:49:31.171614 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6defdb3a-1932-4d90-b25c-af496585b703-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:31.290585 master-0 kubenswrapper[37036]: I0312 14:49:31.290532 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:31.353255 master-0 kubenswrapper[37036]: I0312 14:49:31.352884 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f03ff05b-df65-4f46-92cc-788da9879240\" (UniqueName: \"kubernetes.io/csi/topolvm.io^41eee21a-5223-4889-9748-f540df5ce959\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:31.580143 master-0 kubenswrapper[37036]: I0312 14:49:31.580052 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:31.580397 master-0 kubenswrapper[37036]: E0312 14:49:31.580224 37036 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 14:49:31.580397 master-0 kubenswrapper[37036]: E0312 14:49:31.580259 37036 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 14:49:31.580397 master-0 kubenswrapper[37036]: E0312 14:49:31.580325 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift podName:02b0bb9f-56cd-4ffe-9e37-2200e4baec09 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:33.580302964 +0000 UTC m=+832.588043901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift") pod "swift-storage-0" (UID: "02b0bb9f-56cd-4ffe-9e37-2200e4baec09") : configmap "swift-ring-files" not found Mar 12 14:49:31.726876 master-0 kubenswrapper[37036]: I0312 14:49:31.726160 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4fdf-account-create-update-bvvpt" Mar 12 14:49:31.783535 master-0 kubenswrapper[37036]: I0312 14:49:31.783494 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts\") pod \"9e21e790-37ba-458a-a7a6-c17ed7736b11\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " Mar 12 14:49:31.783804 master-0 kubenswrapper[37036]: I0312 14:49:31.783781 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lrmz\" (UniqueName: \"kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz\") pod \"9e21e790-37ba-458a-a7a6-c17ed7736b11\" (UID: \"9e21e790-37ba-458a-a7a6-c17ed7736b11\") " Mar 12 14:49:31.784188 master-0 kubenswrapper[37036]: I0312 14:49:31.784036 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e21e790-37ba-458a-a7a6-c17ed7736b11" (UID: "9e21e790-37ba-458a-a7a6-c17ed7736b11"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:31.784649 master-0 kubenswrapper[37036]: I0312 14:49:31.784615 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e21e790-37ba-458a-a7a6-c17ed7736b11-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:31.787752 master-0 kubenswrapper[37036]: I0312 14:49:31.787691 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz" (OuterVolumeSpecName: "kube-api-access-5lrmz") pod "9e21e790-37ba-458a-a7a6-c17ed7736b11" (UID: "9e21e790-37ba-458a-a7a6-c17ed7736b11"). InnerVolumeSpecName "kube-api-access-5lrmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:31.887164 master-0 kubenswrapper[37036]: I0312 14:49:31.887045 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lrmz\" (UniqueName: \"kubernetes.io/projected/9e21e790-37ba-458a-a7a6-c17ed7736b11-kube-api-access-5lrmz\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:32.002984 master-0 kubenswrapper[37036]: I0312 14:49:32.002718 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:32.054883 master-0 kubenswrapper[37036]: W0312 14:49:32.048208 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280b3449_e519_4936_b541_9ea239fe7aee.slice/crio-14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537 WatchSource:0}: Error finding container 14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537: Status 404 returned error can't find the container with id 14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537 Mar 12 14:49:32.083838 master-0 kubenswrapper[37036]: I0312 14:49:32.083814 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kqst6"] Mar 12 14:49:32.102224 master-0 kubenswrapper[37036]: I0312 14:49:32.102177 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts\") pod \"68421a5c-f523-46fc-8448-704811e6ed1c\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " Mar 12 14:49:32.102398 master-0 kubenswrapper[37036]: I0312 14:49:32.102377 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n86rp\" (UniqueName: \"kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp\") pod \"68421a5c-f523-46fc-8448-704811e6ed1c\" (UID: \"68421a5c-f523-46fc-8448-704811e6ed1c\") " Mar 12 14:49:32.102825 master-0 kubenswrapper[37036]: I0312 14:49:32.102785 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68421a5c-f523-46fc-8448-704811e6ed1c" (UID: "68421a5c-f523-46fc-8448-704811e6ed1c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:32.103278 master-0 kubenswrapper[37036]: I0312 14:49:32.103251 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68421a5c-f523-46fc-8448-704811e6ed1c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:32.109763 master-0 kubenswrapper[37036]: I0312 14:49:32.109629 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp" (OuterVolumeSpecName: "kube-api-access-n86rp") pod "68421a5c-f523-46fc-8448-704811e6ed1c" (UID: "68421a5c-f523-46fc-8448-704811e6ed1c"). InnerVolumeSpecName "kube-api-access-n86rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:32.114510 master-0 kubenswrapper[37036]: I0312 14:49:32.114301 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9848-account-create-update-p47pv"] Mar 12 14:49:32.133377 master-0 kubenswrapper[37036]: I0312 14:49:32.129343 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4fdf-account-create-update-bvvpt" event={"ID":"9e21e790-37ba-458a-a7a6-c17ed7736b11","Type":"ContainerDied","Data":"e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8"} Mar 12 14:49:32.133377 master-0 kubenswrapper[37036]: I0312 14:49:32.129397 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3143c2c59577b34962f6456c7d3e35c45253dfad4a865d9895b58f70d3d58e8" Mar 12 14:49:32.133377 master-0 kubenswrapper[37036]: I0312 14:49:32.129461 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4fdf-account-create-update-bvvpt" Mar 12 14:49:32.138764 master-0 kubenswrapper[37036]: I0312 14:49:32.138664 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9848-account-create-update-p47pv" event={"ID":"280b3449-e519-4936-b541-9ea239fe7aee","Type":"ContainerStarted","Data":"14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537"} Mar 12 14:49:32.141047 master-0 kubenswrapper[37036]: I0312 14:49:32.141014 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kqst6" event={"ID":"3dccc99c-4958-49fe-8db1-1658241ccd0c","Type":"ContainerStarted","Data":"a9c2f70e067e7b8aee2db7a614dd6cd08dc36d8225f0e8100b439f3df46f33c6"} Mar 12 14:49:32.152048 master-0 kubenswrapper[37036]: I0312 14:49:32.151997 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1d9f-account-create-update-kkfzt" event={"ID":"68421a5c-f523-46fc-8448-704811e6ed1c","Type":"ContainerDied","Data":"c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba"} Mar 12 14:49:32.152048 master-0 kubenswrapper[37036]: I0312 14:49:32.152088 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b184a4a5c82e5d9b2007d3e08ca8d28e16ec3b9ca998bb5f01c9203c430fba" Mar 12 14:49:32.152048 master-0 kubenswrapper[37036]: I0312 14:49:32.152066 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1d9f-account-create-update-kkfzt" Mar 12 14:49:32.179237 master-0 kubenswrapper[37036]: I0312 14:49:32.179188 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qpqr7"] Mar 12 14:49:32.192259 master-0 kubenswrapper[37036]: W0312 14:49:32.192193 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35ad3f9_8c4a_47cb_8326_a552e0b1dad1.slice/crio-05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63 WatchSource:0}: Error finding container 05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63: Status 404 returned error can't find the container with id 05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63 Mar 12 14:49:32.204853 master-0 kubenswrapper[37036]: I0312 14:49:32.204827 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n86rp\" (UniqueName: \"kubernetes.io/projected/68421a5c-f523-46fc-8448-704811e6ed1c-kube-api-access-n86rp\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:32.595451 master-0 kubenswrapper[37036]: E0312 14:49:32.595379 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dccc99c_4958_49fe_8db1_1658241ccd0c.slice/crio-conmon-6a0fde509ae35c603b46cecb5322e46a6febfd4b04a5a2b6ddc6b420f89d3416.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280b3449_e519_4936_b541_9ea239fe7aee.slice/crio-8c74d49d9f6a0c835072b6d90a46c5ba57aca3e8877bac791cbee44313435abd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280b3449_e519_4936_b541_9ea239fe7aee.slice/crio-conmon-8c74d49d9f6a0c835072b6d90a46c5ba57aca3e8877bac791cbee44313435abd.scope\": 
RecentStats: unable to find data in memory cache]" Mar 12 14:49:33.163737 master-0 kubenswrapper[37036]: I0312 14:49:33.163581 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qpqr7" event={"ID":"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1","Type":"ContainerStarted","Data":"05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63"} Mar 12 14:49:33.165933 master-0 kubenswrapper[37036]: I0312 14:49:33.165871 37036 generic.go:334] "Generic (PLEG): container finished" podID="3dccc99c-4958-49fe-8db1-1658241ccd0c" containerID="6a0fde509ae35c603b46cecb5322e46a6febfd4b04a5a2b6ddc6b420f89d3416" exitCode=0 Mar 12 14:49:33.166953 master-0 kubenswrapper[37036]: I0312 14:49:33.165943 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kqst6" event={"ID":"3dccc99c-4958-49fe-8db1-1658241ccd0c","Type":"ContainerDied","Data":"6a0fde509ae35c603b46cecb5322e46a6febfd4b04a5a2b6ddc6b420f89d3416"} Mar 12 14:49:33.167985 master-0 kubenswrapper[37036]: I0312 14:49:33.167889 37036 generic.go:334] "Generic (PLEG): container finished" podID="280b3449-e519-4936-b541-9ea239fe7aee" containerID="8c74d49d9f6a0c835072b6d90a46c5ba57aca3e8877bac791cbee44313435abd" exitCode=0 Mar 12 14:49:33.168323 master-0 kubenswrapper[37036]: I0312 14:49:33.167989 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9848-account-create-update-p47pv" event={"ID":"280b3449-e519-4936-b541-9ea239fe7aee","Type":"ContainerDied","Data":"8c74d49d9f6a0c835072b6d90a46c5ba57aca3e8877bac791cbee44313435abd"} Mar 12 14:49:33.347281 master-0 kubenswrapper[37036]: I0312 14:49:33.347206 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qm4b5"] Mar 12 14:49:33.347763 master-0 kubenswrapper[37036]: E0312 14:49:33.347696 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68421a5c-f523-46fc-8448-704811e6ed1c" containerName="mariadb-account-create-update" Mar 12 
14:49:33.347763 master-0 kubenswrapper[37036]: I0312 14:49:33.347714 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="68421a5c-f523-46fc-8448-704811e6ed1c" containerName="mariadb-account-create-update" Mar 12 14:49:33.347763 master-0 kubenswrapper[37036]: E0312 14:49:33.347754 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6defdb3a-1932-4d90-b25c-af496585b703" containerName="mariadb-database-create" Mar 12 14:49:33.347763 master-0 kubenswrapper[37036]: I0312 14:49:33.347760 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="6defdb3a-1932-4d90-b25c-af496585b703" containerName="mariadb-database-create" Mar 12 14:49:33.347923 master-0 kubenswrapper[37036]: E0312 14:49:33.347776 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e21e790-37ba-458a-a7a6-c17ed7736b11" containerName="mariadb-account-create-update" Mar 12 14:49:33.347923 master-0 kubenswrapper[37036]: I0312 14:49:33.347783 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e21e790-37ba-458a-a7a6-c17ed7736b11" containerName="mariadb-account-create-update" Mar 12 14:49:33.348021 master-0 kubenswrapper[37036]: I0312 14:49:33.348002 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="6defdb3a-1932-4d90-b25c-af496585b703" containerName="mariadb-database-create" Mar 12 14:49:33.348064 master-0 kubenswrapper[37036]: I0312 14:49:33.348038 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="68421a5c-f523-46fc-8448-704811e6ed1c" containerName="mariadb-account-create-update" Mar 12 14:49:33.348064 master-0 kubenswrapper[37036]: I0312 14:49:33.348055 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e21e790-37ba-458a-a7a6-c17ed7736b11" containerName="mariadb-account-create-update" Mar 12 14:49:33.348740 master-0 kubenswrapper[37036]: I0312 14:49:33.348724 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.350575 master-0 kubenswrapper[37036]: I0312 14:49:33.350550 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 14:49:33.365028 master-0 kubenswrapper[37036]: I0312 14:49:33.364386 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm4b5"] Mar 12 14:49:33.450046 master-0 kubenswrapper[37036]: I0312 14:49:33.449915 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dfb\" (UniqueName: \"kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.450232 master-0 kubenswrapper[37036]: I0312 14:49:33.450195 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.552923 master-0 kubenswrapper[37036]: I0312 14:49:33.552842 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.553475 master-0 kubenswrapper[37036]: I0312 14:49:33.553442 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dfb\" (UniqueName: 
\"kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.553783 master-0 kubenswrapper[37036]: I0312 14:49:33.553725 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.574289 master-0 kubenswrapper[37036]: I0312 14:49:33.574235 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dfb\" (UniqueName: \"kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb\") pod \"root-account-create-update-qm4b5\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:33.655967 master-0 kubenswrapper[37036]: I0312 14:49:33.655891 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:33.656206 master-0 kubenswrapper[37036]: E0312 14:49:33.656136 37036 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 14:49:33.656206 master-0 kubenswrapper[37036]: E0312 14:49:33.656171 37036 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 14:49:33.656271 master-0 kubenswrapper[37036]: E0312 14:49:33.656235 37036 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift podName:02b0bb9f-56cd-4ffe-9e37-2200e4baec09 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:37.656215851 +0000 UTC m=+836.663956788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift") pod "swift-storage-0" (UID: "02b0bb9f-56cd-4ffe-9e37-2200e4baec09") : configmap "swift-ring-files" not found Mar 12 14:49:33.672679 master-0 kubenswrapper[37036]: I0312 14:49:33.672628 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:35.574417 master-0 kubenswrapper[37036]: I0312 14:49:35.574357 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kqst6" Mar 12 14:49:35.578444 master-0 kubenswrapper[37036]: I0312 14:49:35.578400 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:35.704835 master-0 kubenswrapper[37036]: I0312 14:49:35.704766 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmwbj\" (UniqueName: \"kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj\") pod \"3dccc99c-4958-49fe-8db1-1658241ccd0c\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " Mar 12 14:49:35.705145 master-0 kubenswrapper[37036]: I0312 14:49:35.705064 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc2sl\" (UniqueName: \"kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl\") pod \"280b3449-e519-4936-b541-9ea239fe7aee\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " Mar 12 14:49:35.705698 master-0 kubenswrapper[37036]: I0312 14:49:35.705670 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts\") pod \"280b3449-e519-4936-b541-9ea239fe7aee\" (UID: \"280b3449-e519-4936-b541-9ea239fe7aee\") " Mar 12 14:49:35.705797 master-0 kubenswrapper[37036]: I0312 14:49:35.705780 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts\") pod \"3dccc99c-4958-49fe-8db1-1658241ccd0c\" (UID: \"3dccc99c-4958-49fe-8db1-1658241ccd0c\") " Mar 12 14:49:35.707363 master-0 kubenswrapper[37036]: I0312 14:49:35.707331 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dccc99c-4958-49fe-8db1-1658241ccd0c" (UID: "3dccc99c-4958-49fe-8db1-1658241ccd0c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:35.708119 master-0 kubenswrapper[37036]: I0312 14:49:35.708088 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "280b3449-e519-4936-b541-9ea239fe7aee" (UID: "280b3449-e519-4936-b541-9ea239fe7aee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:35.709608 master-0 kubenswrapper[37036]: I0312 14:49:35.709570 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj" (OuterVolumeSpecName: "kube-api-access-tmwbj") pod "3dccc99c-4958-49fe-8db1-1658241ccd0c" (UID: "3dccc99c-4958-49fe-8db1-1658241ccd0c"). InnerVolumeSpecName "kube-api-access-tmwbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:35.712379 master-0 kubenswrapper[37036]: I0312 14:49:35.712336 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl" (OuterVolumeSpecName: "kube-api-access-tc2sl") pod "280b3449-e519-4936-b541-9ea239fe7aee" (UID: "280b3449-e519-4936-b541-9ea239fe7aee"). InnerVolumeSpecName "kube-api-access-tc2sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:35.808548 master-0 kubenswrapper[37036]: I0312 14:49:35.808474 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc2sl\" (UniqueName: \"kubernetes.io/projected/280b3449-e519-4936-b541-9ea239fe7aee-kube-api-access-tc2sl\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:35.808548 master-0 kubenswrapper[37036]: I0312 14:49:35.808535 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280b3449-e519-4936-b541-9ea239fe7aee-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:35.808548 master-0 kubenswrapper[37036]: I0312 14:49:35.808552 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dccc99c-4958-49fe-8db1-1658241ccd0c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:35.808825 master-0 kubenswrapper[37036]: I0312 14:49:35.808565 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmwbj\" (UniqueName: \"kubernetes.io/projected/3dccc99c-4958-49fe-8db1-1658241ccd0c-kube-api-access-tmwbj\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:35.968210 master-0 kubenswrapper[37036]: I0312 14:49:35.966683 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm4b5"] Mar 12 14:49:36.227297 master-0 kubenswrapper[37036]: I0312 14:49:36.227167 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qpqr7" event={"ID":"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1","Type":"ContainerStarted","Data":"d3ecde157f64258ed74f50d52b92c5e69118e60e0b175db8b167ef85df20bafc"} Mar 12 14:49:36.230700 master-0 kubenswrapper[37036]: I0312 14:49:36.230649 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kqst6" Mar 12 14:49:36.231039 master-0 kubenswrapper[37036]: I0312 14:49:36.230947 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kqst6" event={"ID":"3dccc99c-4958-49fe-8db1-1658241ccd0c","Type":"ContainerDied","Data":"a9c2f70e067e7b8aee2db7a614dd6cd08dc36d8225f0e8100b439f3df46f33c6"} Mar 12 14:49:36.231039 master-0 kubenswrapper[37036]: I0312 14:49:36.231007 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c2f70e067e7b8aee2db7a614dd6cd08dc36d8225f0e8100b439f3df46f33c6" Mar 12 14:49:36.234073 master-0 kubenswrapper[37036]: I0312 14:49:36.233925 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm4b5" event={"ID":"02bba0bb-74c4-4bee-8353-8209e63b8639","Type":"ContainerStarted","Data":"3312bced8ddfaa859b4b5ce821d267e776224cbcedcad7308170a24b3f24dd14"} Mar 12 14:49:36.234073 master-0 kubenswrapper[37036]: I0312 14:49:36.233955 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm4b5" event={"ID":"02bba0bb-74c4-4bee-8353-8209e63b8639","Type":"ContainerStarted","Data":"afb0f8b4ac3a2398eb5ecb89fc47be9f3e494f7d259c14a2552b538c0b723730"} Mar 12 14:49:36.235993 master-0 kubenswrapper[37036]: I0312 14:49:36.235957 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9848-account-create-update-p47pv" event={"ID":"280b3449-e519-4936-b541-9ea239fe7aee","Type":"ContainerDied","Data":"14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537"} Mar 12 14:49:36.236097 master-0 kubenswrapper[37036]: I0312 14:49:36.236084 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e1bce25f68fa3752ed83a3cfeb3ecee547161c7e4cb3450f9fcc0f5c74d537" Mar 12 14:49:36.236214 master-0 kubenswrapper[37036]: I0312 14:49:36.236199 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9848-account-create-update-p47pv" Mar 12 14:49:36.261855 master-0 kubenswrapper[37036]: I0312 14:49:36.261758 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qpqr7" podStartSLOduration=2.994640639 podStartE2EDuration="6.261733563s" podCreationTimestamp="2026-03-12 14:49:30 +0000 UTC" firstStartedPulling="2026-03-12 14:49:32.194246172 +0000 UTC m=+831.201987119" lastFinishedPulling="2026-03-12 14:49:35.461339116 +0000 UTC m=+834.469080043" observedRunningTime="2026-03-12 14:49:36.247109039 +0000 UTC m=+835.254849976" watchObservedRunningTime="2026-03-12 14:49:36.261733563 +0000 UTC m=+835.269474500" Mar 12 14:49:36.279019 master-0 kubenswrapper[37036]: I0312 14:49:36.278945 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qm4b5" podStartSLOduration=3.278925929 podStartE2EDuration="3.278925929s" podCreationTimestamp="2026-03-12 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:36.268948182 +0000 UTC m=+835.276689129" watchObservedRunningTime="2026-03-12 14:49:36.278925929 +0000 UTC m=+835.286666866" Mar 12 14:49:37.258835 master-0 kubenswrapper[37036]: I0312 14:49:37.258747 37036 generic.go:334] "Generic (PLEG): container finished" podID="02bba0bb-74c4-4bee-8353-8209e63b8639" containerID="3312bced8ddfaa859b4b5ce821d267e776224cbcedcad7308170a24b3f24dd14" exitCode=0 Mar 12 14:49:37.259543 master-0 kubenswrapper[37036]: I0312 14:49:37.259071 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm4b5" event={"ID":"02bba0bb-74c4-4bee-8353-8209e63b8639","Type":"ContainerDied","Data":"3312bced8ddfaa859b4b5ce821d267e776224cbcedcad7308170a24b3f24dd14"} Mar 12 14:49:37.749204 master-0 kubenswrapper[37036]: I0312 14:49:37.749149 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:37.749453 master-0 kubenswrapper[37036]: E0312 14:49:37.749419 37036 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 14:49:37.749453 master-0 kubenswrapper[37036]: E0312 14:49:37.749453 37036 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 14:49:37.749527 master-0 kubenswrapper[37036]: E0312 14:49:37.749511 37036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift podName:02b0bb9f-56cd-4ffe-9e37-2200e4baec09 nodeName:}" failed. No retries permitted until 2026-03-12 14:49:45.74949165 +0000 UTC m=+844.757232597 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift") pod "swift-storage-0" (UID: "02b0bb9f-56cd-4ffe-9e37-2200e4baec09") : configmap "swift-ring-files" not found Mar 12 14:49:38.342144 master-0 kubenswrapper[37036]: I0312 14:49:38.342075 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" Mar 12 14:49:38.460892 master-0 kubenswrapper[37036]: I0312 14:49:38.460817 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:49:38.461143 master-0 kubenswrapper[37036]: I0312 14:49:38.461098 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="dnsmasq-dns" containerID="cri-o://bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2" gracePeriod=10 Mar 12 14:49:38.845461 master-0 kubenswrapper[37036]: I0312 14:49:38.845314 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:38.886774 master-0 kubenswrapper[37036]: I0312 14:49:38.886679 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts\") pod \"02bba0bb-74c4-4bee-8353-8209e63b8639\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " Mar 12 14:49:38.887519 master-0 kubenswrapper[37036]: I0312 14:49:38.887488 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2dfb\" (UniqueName: \"kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb\") pod \"02bba0bb-74c4-4bee-8353-8209e63b8639\" (UID: \"02bba0bb-74c4-4bee-8353-8209e63b8639\") " Mar 12 14:49:38.893487 master-0 kubenswrapper[37036]: I0312 14:49:38.893438 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02bba0bb-74c4-4bee-8353-8209e63b8639" (UID: "02bba0bb-74c4-4bee-8353-8209e63b8639"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:38.893757 master-0 kubenswrapper[37036]: I0312 14:49:38.893614 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb" (OuterVolumeSpecName: "kube-api-access-v2dfb") pod "02bba0bb-74c4-4bee-8353-8209e63b8639" (UID: "02bba0bb-74c4-4bee-8353-8209e63b8639"). InnerVolumeSpecName "kube-api-access-v2dfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:38.993588 master-0 kubenswrapper[37036]: I0312 14:49:38.993513 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2dfb\" (UniqueName: \"kubernetes.io/projected/02bba0bb-74c4-4bee-8353-8209e63b8639-kube-api-access-v2dfb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:38.993588 master-0 kubenswrapper[37036]: I0312 14:49:38.993584 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bba0bb-74c4-4bee-8353-8209e63b8639-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:39.211432 master-0 kubenswrapper[37036]: I0312 14:49:39.211375 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:49:39.298043 master-0 kubenswrapper[37036]: I0312 14:49:39.297983 37036 generic.go:334] "Generic (PLEG): container finished" podID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerID="bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2" exitCode=0 Mar 12 14:49:39.298243 master-0 kubenswrapper[37036]: I0312 14:49:39.298061 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" event={"ID":"9866f383-8abf-4106-9cf6-9e6265fe07b4","Type":"ContainerDied","Data":"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2"} Mar 12 14:49:39.298243 master-0 kubenswrapper[37036]: I0312 14:49:39.298112 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" event={"ID":"9866f383-8abf-4106-9cf6-9e6265fe07b4","Type":"ContainerDied","Data":"00dfd6ee8f6122f7aca9e4d925b373cb8a7826ec186e0a61c205ea58bedb3fd6"} Mar 12 14:49:39.298243 master-0 kubenswrapper[37036]: I0312 14:49:39.298131 37036 scope.go:117] "RemoveContainer" containerID="bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2" Mar 12 14:49:39.298345 master-0 
kubenswrapper[37036]: I0312 14:49:39.298278 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" Mar 12 14:49:39.320684 master-0 kubenswrapper[37036]: I0312 14:49:39.307657 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm4b5" event={"ID":"02bba0bb-74c4-4bee-8353-8209e63b8639","Type":"ContainerDied","Data":"afb0f8b4ac3a2398eb5ecb89fc47be9f3e494f7d259c14a2552b538c0b723730"} Mar 12 14:49:39.320684 master-0 kubenswrapper[37036]: I0312 14:49:39.307683 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb0f8b4ac3a2398eb5ecb89fc47be9f3e494f7d259c14a2552b538c0b723730" Mar 12 14:49:39.320684 master-0 kubenswrapper[37036]: I0312 14:49:39.307749 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm4b5" Mar 12 14:49:39.328299 master-0 kubenswrapper[37036]: I0312 14:49:39.328209 37036 scope.go:117] "RemoveContainer" containerID="88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229" Mar 12 14:49:39.356998 master-0 kubenswrapper[37036]: I0312 14:49:39.353988 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qm4b5"] Mar 12 14:49:39.358124 master-0 kubenswrapper[37036]: I0312 14:49:39.358066 37036 scope.go:117] "RemoveContainer" containerID="bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2" Mar 12 14:49:39.358938 master-0 kubenswrapper[37036]: E0312 14:49:39.358742 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2\": container with ID starting with bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2 not found: ID does not exist" containerID="bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2" Mar 12 
14:49:39.358938 master-0 kubenswrapper[37036]: I0312 14:49:39.358838 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2"} err="failed to get container status \"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2\": rpc error: code = NotFound desc = could not find container \"bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2\": container with ID starting with bc43838a1d1e6583d9838261024a301cb9ad104ca3b277e536c19353a3e7dee2 not found: ID does not exist" Mar 12 14:49:39.358938 master-0 kubenswrapper[37036]: I0312 14:49:39.358939 37036 scope.go:117] "RemoveContainer" containerID="88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229" Mar 12 14:49:39.359255 master-0 kubenswrapper[37036]: E0312 14:49:39.359201 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229\": container with ID starting with 88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229 not found: ID does not exist" containerID="88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229" Mar 12 14:49:39.359324 master-0 kubenswrapper[37036]: I0312 14:49:39.359257 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229"} err="failed to get container status \"88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229\": rpc error: code = NotFound desc = could not find container \"88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229\": container with ID starting with 88ff80f2841f969d593cb9825055b9cca6171bcc8e5b19cf198571e7bbed1229 not found: ID does not exist" Mar 12 14:49:39.363326 master-0 kubenswrapper[37036]: I0312 14:49:39.363276 37036 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/root-account-create-update-qm4b5"] Mar 12 14:49:39.417876 master-0 kubenswrapper[37036]: I0312 14:49:39.417739 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config\") pod \"9866f383-8abf-4106-9cf6-9e6265fe07b4\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " Mar 12 14:49:39.418058 master-0 kubenswrapper[37036]: I0312 14:49:39.417990 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwswk\" (UniqueName: \"kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk\") pod \"9866f383-8abf-4106-9cf6-9e6265fe07b4\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " Mar 12 14:49:39.418111 master-0 kubenswrapper[37036]: I0312 14:49:39.418089 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc\") pod \"9866f383-8abf-4106-9cf6-9e6265fe07b4\" (UID: \"9866f383-8abf-4106-9cf6-9e6265fe07b4\") " Mar 12 14:49:39.421595 master-0 kubenswrapper[37036]: I0312 14:49:39.421449 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk" (OuterVolumeSpecName: "kube-api-access-pwswk") pod "9866f383-8abf-4106-9cf6-9e6265fe07b4" (UID: "9866f383-8abf-4106-9cf6-9e6265fe07b4"). InnerVolumeSpecName "kube-api-access-pwswk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:39.461113 master-0 kubenswrapper[37036]: I0312 14:49:39.459825 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9866f383-8abf-4106-9cf6-9e6265fe07b4" (UID: "9866f383-8abf-4106-9cf6-9e6265fe07b4"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:39.461558 master-0 kubenswrapper[37036]: I0312 14:49:39.461443 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config" (OuterVolumeSpecName: "config") pod "9866f383-8abf-4106-9cf6-9e6265fe07b4" (UID: "9866f383-8abf-4106-9cf6-9e6265fe07b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:39.520363 master-0 kubenswrapper[37036]: I0312 14:49:39.520296 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:39.520363 master-0 kubenswrapper[37036]: I0312 14:49:39.520353 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwswk\" (UniqueName: \"kubernetes.io/projected/9866f383-8abf-4106-9cf6-9e6265fe07b4-kube-api-access-pwswk\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:39.520363 master-0 kubenswrapper[37036]: I0312 14:49:39.520366 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9866f383-8abf-4106-9cf6-9e6265fe07b4-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:39.650761 master-0 kubenswrapper[37036]: I0312 14:49:39.650685 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:49:39.660100 master-0 kubenswrapper[37036]: I0312 14:49:39.660044 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8"] Mar 12 14:49:40.502206 master-0 kubenswrapper[37036]: I0312 14:49:40.502151 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 14:49:41.095328 master-0 kubenswrapper[37036]: I0312 14:49:41.095263 37036 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-989wd"] Mar 12 14:49:41.095761 master-0 kubenswrapper[37036]: E0312 14:49:41.095727 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="init" Mar 12 14:49:41.095761 master-0 kubenswrapper[37036]: I0312 14:49:41.095755 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="init" Mar 12 14:49:41.095837 master-0 kubenswrapper[37036]: E0312 14:49:41.095803 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280b3449-e519-4936-b541-9ea239fe7aee" containerName="mariadb-account-create-update" Mar 12 14:49:41.095837 master-0 kubenswrapper[37036]: I0312 14:49:41.095812 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="280b3449-e519-4936-b541-9ea239fe7aee" containerName="mariadb-account-create-update" Mar 12 14:49:41.095837 master-0 kubenswrapper[37036]: E0312 14:49:41.095822 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bba0bb-74c4-4bee-8353-8209e63b8639" containerName="mariadb-account-create-update" Mar 12 14:49:41.095837 master-0 kubenswrapper[37036]: I0312 14:49:41.095829 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bba0bb-74c4-4bee-8353-8209e63b8639" containerName="mariadb-account-create-update" Mar 12 14:49:41.095961 master-0 kubenswrapper[37036]: E0312 14:49:41.095857 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="dnsmasq-dns" Mar 12 14:49:41.095961 master-0 kubenswrapper[37036]: I0312 14:49:41.095867 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="dnsmasq-dns" Mar 12 14:49:41.095961 master-0 kubenswrapper[37036]: E0312 14:49:41.095879 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dccc99c-4958-49fe-8db1-1658241ccd0c" 
containerName="mariadb-database-create" Mar 12 14:49:41.095961 master-0 kubenswrapper[37036]: I0312 14:49:41.095885 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dccc99c-4958-49fe-8db1-1658241ccd0c" containerName="mariadb-database-create" Mar 12 14:49:41.096187 master-0 kubenswrapper[37036]: I0312 14:49:41.096156 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="280b3449-e519-4936-b541-9ea239fe7aee" containerName="mariadb-account-create-update" Mar 12 14:49:41.096187 master-0 kubenswrapper[37036]: I0312 14:49:41.096186 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dccc99c-4958-49fe-8db1-1658241ccd0c" containerName="mariadb-database-create" Mar 12 14:49:41.096265 master-0 kubenswrapper[37036]: I0312 14:49:41.096202 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bba0bb-74c4-4bee-8353-8209e63b8639" containerName="mariadb-account-create-update" Mar 12 14:49:41.096265 master-0 kubenswrapper[37036]: I0312 14:49:41.096217 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="dnsmasq-dns" Mar 12 14:49:41.097053 master-0 kubenswrapper[37036]: I0312 14:49:41.097018 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.098812 master-0 kubenswrapper[37036]: I0312 14:49:41.098779 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-config-data" Mar 12 14:49:41.113341 master-0 kubenswrapper[37036]: I0312 14:49:41.113007 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-989wd"] Mar 12 14:49:41.246279 master-0 kubenswrapper[37036]: I0312 14:49:41.245399 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bba0bb-74c4-4bee-8353-8209e63b8639" path="/var/lib/kubelet/pods/02bba0bb-74c4-4bee-8353-8209e63b8639/volumes" Mar 12 14:49:41.246279 master-0 kubenswrapper[37036]: I0312 14:49:41.246046 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" path="/var/lib/kubelet/pods/9866f383-8abf-4106-9cf6-9e6265fe07b4/volumes" Mar 12 14:49:41.259697 master-0 kubenswrapper[37036]: I0312 14:49:41.259639 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtpb\" (UniqueName: \"kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.259918 master-0 kubenswrapper[37036]: I0312 14:49:41.259781 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.259918 master-0 kubenswrapper[37036]: I0312 14:49:41.259853 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.260102 master-0 kubenswrapper[37036]: I0312 14:49:41.260070 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.367307 master-0 kubenswrapper[37036]: I0312 14:49:41.367179 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtpb\" (UniqueName: \"kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.367492 master-0 kubenswrapper[37036]: I0312 14:49:41.367372 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.367492 master-0 kubenswrapper[37036]: I0312 14:49:41.367479 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.367590 master-0 kubenswrapper[37036]: I0312 14:49:41.367562 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.369147 master-0 kubenswrapper[37036]: I0312 14:49:41.369083 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-config-data" Mar 12 14:49:41.371204 master-0 kubenswrapper[37036]: I0312 14:49:41.371172 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.385917 master-0 kubenswrapper[37036]: I0312 14:49:41.383453 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.390748 master-0 kubenswrapper[37036]: I0312 14:49:41.386795 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.398676 master-0 kubenswrapper[37036]: I0312 14:49:41.398398 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtpb\" (UniqueName: \"kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb\") pod \"glance-db-sync-989wd\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " pod="openstack/glance-db-sync-989wd" Mar 12 14:49:41.411879 master-0 kubenswrapper[37036]: I0312 14:49:41.411822 37036 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-989wd" Mar 12 14:49:42.123136 master-0 kubenswrapper[37036]: I0312 14:49:42.123049 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-989wd"] Mar 12 14:49:42.240146 master-0 kubenswrapper[37036]: I0312 14:49:42.240102 37036 trace.go:236] Trace[1826356158]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (12-Mar-2026 14:49:41.211) (total time: 1028ms): Mar 12 14:49:42.240146 master-0 kubenswrapper[37036]: Trace[1826356158]: [1.02894582s] [1.02894582s] END Mar 12 14:49:42.340799 master-0 kubenswrapper[37036]: I0312 14:49:42.340690 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-989wd" event={"ID":"78a7388f-90a4-420a-b2ef-e31fb1fda25e","Type":"ContainerStarted","Data":"cce253be1910b2659f7b5bbe842e262b1b048b283695aa30f3c81858c3aa77ca"} Mar 12 14:49:42.389341 master-0 kubenswrapper[37036]: I0312 14:49:42.389289 37036 trace.go:236] Trace[1375847326]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (12-Mar-2026 14:49:41.211) (total time: 1177ms): Mar 12 14:49:42.389341 master-0 kubenswrapper[37036]: Trace[1375847326]: [1.177458056s] [1.177458056s] END Mar 12 14:49:43.365086 master-0 kubenswrapper[37036]: I0312 14:49:43.365034 37036 generic.go:334] "Generic (PLEG): container finished" podID="a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" containerID="d3ecde157f64258ed74f50d52b92c5e69118e60e0b175db8b167ef85df20bafc" exitCode=0 Mar 12 14:49:43.365086 master-0 kubenswrapper[37036]: I0312 14:49:43.365091 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qpqr7" event={"ID":"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1","Type":"ContainerDied","Data":"d3ecde157f64258ed74f50d52b92c5e69118e60e0b175db8b167ef85df20bafc"} Mar 12 14:49:43.381537 master-0 kubenswrapper[37036]: I0312 14:49:43.380572 37036 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/root-account-create-update-kvdsk"] Mar 12 14:49:43.385621 master-0 kubenswrapper[37036]: I0312 14:49:43.383951 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.388501 master-0 kubenswrapper[37036]: I0312 14:49:43.388414 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 14:49:43.400981 master-0 kubenswrapper[37036]: I0312 14:49:43.400841 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kvdsk"] Mar 12 14:49:43.555800 master-0 kubenswrapper[37036]: I0312 14:49:43.555730 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fdw\" (UniqueName: \"kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.556197 master-0 kubenswrapper[37036]: I0312 14:49:43.556150 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.658673 master-0 kubenswrapper[37036]: I0312 14:49:43.658541 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.658850 master-0 kubenswrapper[37036]: I0312 
14:49:43.658739 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fdw\" (UniqueName: \"kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.659663 master-0 kubenswrapper[37036]: I0312 14:49:43.659615 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.678681 master-0 kubenswrapper[37036]: I0312 14:49:43.678610 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fdw\" (UniqueName: \"kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw\") pod \"root-account-create-update-kvdsk\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.711268 master-0 kubenswrapper[37036]: I0312 14:49:43.711185 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:43.798028 master-0 kubenswrapper[37036]: I0312 14:49:43.797948 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ff8fd9d5c-wn7k8" podUID="9866f383-8abf-4106-9cf6-9e6265fe07b4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.174:5353: i/o timeout" Mar 12 14:49:44.228191 master-0 kubenswrapper[37036]: I0312 14:49:44.228153 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kvdsk"] Mar 12 14:49:44.238434 master-0 kubenswrapper[37036]: W0312 14:49:44.238285 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e7b883c_d1db_467a_9f8f_641d11139185.slice/crio-1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291 WatchSource:0}: Error finding container 1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291: Status 404 returned error can't find the container with id 1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291 Mar 12 14:49:44.382676 master-0 kubenswrapper[37036]: I0312 14:49:44.382612 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvdsk" event={"ID":"5e7b883c-d1db-467a-9f8f-641d11139185","Type":"ContainerStarted","Data":"1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291"} Mar 12 14:49:44.850840 master-0 kubenswrapper[37036]: I0312 14:49:44.850798 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:44.990524 master-0 kubenswrapper[37036]: I0312 14:49:44.990447 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.990774 master-0 kubenswrapper[37036]: I0312 14:49:44.990590 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.990774 master-0 kubenswrapper[37036]: I0312 14:49:44.990678 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.990774 master-0 kubenswrapper[37036]: I0312 14:49:44.990768 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h8mt\" (UniqueName: \"kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.990885 master-0 kubenswrapper[37036]: I0312 14:49:44.990818 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.990885 master-0 kubenswrapper[37036]: I0312 14:49:44.990857 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.991549 master-0 kubenswrapper[37036]: I0312 14:49:44.991511 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle\") pod \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\" (UID: \"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1\") " Mar 12 14:49:44.991932 master-0 kubenswrapper[37036]: I0312 14:49:44.991881 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:49:44.992014 master-0 kubenswrapper[37036]: I0312 14:49:44.991884 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:44.992373 master-0 kubenswrapper[37036]: I0312 14:49:44.992338 37036 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:44.992459 master-0 kubenswrapper[37036]: I0312 14:49:44.992374 37036 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:44.993727 master-0 kubenswrapper[37036]: I0312 14:49:44.993632 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt" (OuterVolumeSpecName: "kube-api-access-9h8mt") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "kube-api-access-9h8mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:44.996241 master-0 kubenswrapper[37036]: I0312 14:49:44.996171 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:49:45.019316 master-0 kubenswrapper[37036]: I0312 14:49:45.019235 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts" (OuterVolumeSpecName: "scripts") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:45.021459 master-0 kubenswrapper[37036]: I0312 14:49:45.021404 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:49:45.021822 master-0 kubenswrapper[37036]: I0312 14:49:45.021766 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" (UID: "a35ad3f9-8c4a-47cb-8326-a552e0b1dad1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:49:45.094964 master-0 kubenswrapper[37036]: I0312 14:49:45.094907 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:45.094964 master-0 kubenswrapper[37036]: I0312 14:49:45.094962 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:45.095150 master-0 kubenswrapper[37036]: I0312 14:49:45.094977 37036 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:45.095150 master-0 kubenswrapper[37036]: I0312 14:49:45.094987 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h8mt\" (UniqueName: 
\"kubernetes.io/projected/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-kube-api-access-9h8mt\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:45.095150 master-0 kubenswrapper[37036]: I0312 14:49:45.094996 37036 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a35ad3f9-8c4a-47cb-8326-a552e0b1dad1-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:45.394319 master-0 kubenswrapper[37036]: I0312 14:49:45.394172 37036 generic.go:334] "Generic (PLEG): container finished" podID="78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e" containerID="f40dd02a35e0d13c24e4347b09e1fd06de3a873ee8c19bfa2f7f841c96074bb0" exitCode=0 Mar 12 14:49:45.394319 master-0 kubenswrapper[37036]: I0312 14:49:45.394258 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e","Type":"ContainerDied","Data":"f40dd02a35e0d13c24e4347b09e1fd06de3a873ee8c19bfa2f7f841c96074bb0"} Mar 12 14:49:45.396463 master-0 kubenswrapper[37036]: I0312 14:49:45.396405 37036 generic.go:334] "Generic (PLEG): container finished" podID="f063fb36-4428-461a-8b29-3750c3f8217f" containerID="f7462dd01e0eb4a4819a9668830cc108b07028a9765d03e8d331f4a0c9108a53" exitCode=0 Mar 12 14:49:45.396560 master-0 kubenswrapper[37036]: I0312 14:49:45.396488 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f063fb36-4428-461a-8b29-3750c3f8217f","Type":"ContainerDied","Data":"f7462dd01e0eb4a4819a9668830cc108b07028a9765d03e8d331f4a0c9108a53"} Mar 12 14:49:45.398625 master-0 kubenswrapper[37036]: I0312 14:49:45.398577 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpqr7" Mar 12 14:49:45.398625 master-0 kubenswrapper[37036]: I0312 14:49:45.398605 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qpqr7" event={"ID":"a35ad3f9-8c4a-47cb-8326-a552e0b1dad1","Type":"ContainerDied","Data":"05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63"} Mar 12 14:49:45.398802 master-0 kubenswrapper[37036]: I0312 14:49:45.398671 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05bf85380fe7063915e3486024d2c78eb5bc6eba73c14d6c20208e8fbbf15e63" Mar 12 14:49:45.400607 master-0 kubenswrapper[37036]: I0312 14:49:45.400573 37036 generic.go:334] "Generic (PLEG): container finished" podID="5e7b883c-d1db-467a-9f8f-641d11139185" containerID="9457a74c3b0d7737ba8ee40b0e7cd1ce0418c98f841e5d68dfc27385f4ea28bd" exitCode=0 Mar 12 14:49:45.400607 master-0 kubenswrapper[37036]: I0312 14:49:45.400601 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvdsk" event={"ID":"5e7b883c-d1db-467a-9f8f-641d11139185","Type":"ContainerDied","Data":"9457a74c3b0d7737ba8ee40b0e7cd1ce0418c98f841e5d68dfc27385f4ea28bd"} Mar 12 14:49:45.812130 master-0 kubenswrapper[37036]: I0312 14:49:45.812017 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:45.818319 master-0 kubenswrapper[37036]: I0312 14:49:45.818262 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/02b0bb9f-56cd-4ffe-9e37-2200e4baec09-etc-swift\") pod \"swift-storage-0\" (UID: \"02b0bb9f-56cd-4ffe-9e37-2200e4baec09\") " pod="openstack/swift-storage-0" Mar 12 14:49:45.899608 master-0 
kubenswrapper[37036]: I0312 14:49:45.899538 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4mq52" podUID="a24e803b-32b7-4b4b-bb59-f58b9a506626" containerName="ovn-controller" probeResult="failure" output=< Mar 12 14:49:45.899608 master-0 kubenswrapper[37036]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 14:49:45.899608 master-0 kubenswrapper[37036]: > Mar 12 14:49:46.050631 master-0 kubenswrapper[37036]: I0312 14:49:46.050592 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:49:46.056252 master-0 kubenswrapper[37036]: I0312 14:49:46.056220 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6l42m" Mar 12 14:49:46.069057 master-0 kubenswrapper[37036]: I0312 14:49:46.068890 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 12 14:49:46.312964 master-0 kubenswrapper[37036]: I0312 14:49:46.311273 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4mq52-config-27v4x"] Mar 12 14:49:46.312964 master-0 kubenswrapper[37036]: E0312 14:49:46.311810 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" containerName="swift-ring-rebalance" Mar 12 14:49:46.312964 master-0 kubenswrapper[37036]: I0312 14:49:46.311837 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" containerName="swift-ring-rebalance" Mar 12 14:49:46.312964 master-0 kubenswrapper[37036]: I0312 14:49:46.312147 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35ad3f9-8c4a-47cb-8326-a552e0b1dad1" containerName="swift-ring-rebalance" Mar 12 14:49:46.316953 master-0 kubenswrapper[37036]: I0312 14:49:46.314884 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.319334 master-0 kubenswrapper[37036]: I0312 14:49:46.319111 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 14:49:46.343447 master-0 kubenswrapper[37036]: I0312 14:49:46.343394 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4mq52-config-27v4x"] Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428265 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428322 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428350 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428398 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428429 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kldzb\" (UniqueName: \"kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.429968 master-0 kubenswrapper[37036]: I0312 14:49:46.428499 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.436257 master-0 kubenswrapper[37036]: I0312 14:49:46.434686 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e","Type":"ContainerStarted","Data":"c950b841e636bb2d7947c8594ba54cae612daa94d3164f7ed45bd829ed7c3de2"} Mar 12 14:49:46.436257 master-0 kubenswrapper[37036]: I0312 14:49:46.436082 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 14:49:46.445007 master-0 kubenswrapper[37036]: I0312 14:49:46.439957 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f063fb36-4428-461a-8b29-3750c3f8217f","Type":"ContainerStarted","Data":"21d8c4ffcd790a3952476059e0a27a21c91c4d75c8ac6ef85544c12a14020fc3"} Mar 12 14:49:46.445007 master-0 kubenswrapper[37036]: I0312 14:49:46.440362 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:49:46.484058 master-0 kubenswrapper[37036]: I0312 14:49:46.483902 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.901927231 podStartE2EDuration="1m3.483882777s" podCreationTimestamp="2026-03-12 14:48:43 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.82299567 +0000 UTC m=+798.830736607" lastFinishedPulling="2026-03-12 14:49:10.404951216 +0000 UTC m=+809.412692153" observedRunningTime="2026-03-12 14:49:46.467550153 +0000 UTC m=+845.475291090" watchObservedRunningTime="2026-03-12 14:49:46.483882777 +0000 UTC m=+845.491623704" Mar 12 14:49:46.513802 master-0 kubenswrapper[37036]: I0312 14:49:46.513720 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.254393839 podStartE2EDuration="1m3.513704678s" podCreationTimestamp="2026-03-12 14:48:43 +0000 UTC" firstStartedPulling="2026-03-12 14:48:59.821538584 +0000 UTC m=+798.829279521" lastFinishedPulling="2026-03-12 14:49:11.080849423 +0000 UTC m=+810.088590360" observedRunningTime="2026-03-12 14:49:46.504457168 +0000 UTC m=+845.512198105" watchObservedRunningTime="2026-03-12 14:49:46.513704678 +0000 UTC m=+845.521445615" Mar 12 14:49:46.531307 master-0 kubenswrapper[37036]: I0312 14:49:46.531264 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.531426 master-0 kubenswrapper[37036]: I0312 14:49:46.531354 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kldzb\" (UniqueName: 
\"kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.531502 master-0 kubenswrapper[37036]: I0312 14:49:46.531453 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.531841 master-0 kubenswrapper[37036]: I0312 14:49:46.531699 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.532071 master-0 kubenswrapper[37036]: I0312 14:49:46.532042 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.532119 master-0 kubenswrapper[37036]: I0312 14:49:46.532082 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.532163 master-0 kubenswrapper[37036]: I0312 14:49:46.532129 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.532223 master-0 kubenswrapper[37036]: I0312 14:49:46.532174 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.532520 master-0 kubenswrapper[37036]: I0312 14:49:46.532497 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.533207 master-0 kubenswrapper[37036]: I0312 14:49:46.533182 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.534050 master-0 kubenswrapper[37036]: I0312 14:49:46.534016 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.550033 master-0 kubenswrapper[37036]: I0312 14:49:46.549900 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-kldzb\" (UniqueName: \"kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb\") pod \"ovn-controller-4mq52-config-27v4x\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.655080 master-0 kubenswrapper[37036]: I0312 14:49:46.652523 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:46.662852 master-0 kubenswrapper[37036]: I0312 14:49:46.661509 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 14:49:46.718886 master-0 kubenswrapper[37036]: W0312 14:49:46.718805 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b0bb9f_56cd_4ffe_9e37_2200e4baec09.slice/crio-ebdf5229a10689d1b4089a3b13bf3497421d8dfd183f7cd4790b81da49d5dca1 WatchSource:0}: Error finding container ebdf5229a10689d1b4089a3b13bf3497421d8dfd183f7cd4790b81da49d5dca1: Status 404 returned error can't find the container with id ebdf5229a10689d1b4089a3b13bf3497421d8dfd183f7cd4790b81da49d5dca1 Mar 12 14:49:46.950182 master-0 kubenswrapper[37036]: I0312 14:49:46.950107 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:47.050905 master-0 kubenswrapper[37036]: I0312 14:49:47.050852 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts\") pod \"5e7b883c-d1db-467a-9f8f-641d11139185\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " Mar 12 14:49:47.051471 master-0 kubenswrapper[37036]: I0312 14:49:47.051366 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e7b883c-d1db-467a-9f8f-641d11139185" (UID: "5e7b883c-d1db-467a-9f8f-641d11139185"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:47.052031 master-0 kubenswrapper[37036]: I0312 14:49:47.051633 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fdw\" (UniqueName: \"kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw\") pod \"5e7b883c-d1db-467a-9f8f-641d11139185\" (UID: \"5e7b883c-d1db-467a-9f8f-641d11139185\") " Mar 12 14:49:47.054409 master-0 kubenswrapper[37036]: I0312 14:49:47.054367 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e7b883c-d1db-467a-9f8f-641d11139185-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:47.071502 master-0 kubenswrapper[37036]: I0312 14:49:47.071335 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw" (OuterVolumeSpecName: "kube-api-access-58fdw") pod "5e7b883c-d1db-467a-9f8f-641d11139185" (UID: "5e7b883c-d1db-467a-9f8f-641d11139185"). 
InnerVolumeSpecName "kube-api-access-58fdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:47.156045 master-0 kubenswrapper[37036]: I0312 14:49:47.155913 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58fdw\" (UniqueName: \"kubernetes.io/projected/5e7b883c-d1db-467a-9f8f-641d11139185-kube-api-access-58fdw\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:47.180798 master-0 kubenswrapper[37036]: I0312 14:49:47.180579 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4mq52-config-27v4x"] Mar 12 14:49:47.185537 master-0 kubenswrapper[37036]: W0312 14:49:47.185282 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3cb08e9_3851_4ab4_bc90_bd463bfee8b2.slice/crio-aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018 WatchSource:0}: Error finding container aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018: Status 404 returned error can't find the container with id aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018 Mar 12 14:49:47.476363 master-0 kubenswrapper[37036]: I0312 14:49:47.476098 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvdsk" event={"ID":"5e7b883c-d1db-467a-9f8f-641d11139185","Type":"ContainerDied","Data":"1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291"} Mar 12 14:49:47.476363 master-0 kubenswrapper[37036]: I0312 14:49:47.476351 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff7a2c2484e6ea377ca1fa2f6b10b43a9031716f9ab86f005ea847810774291" Mar 12 14:49:47.477113 master-0 kubenswrapper[37036]: I0312 14:49:47.476433 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvdsk" Mar 12 14:49:47.487316 master-0 kubenswrapper[37036]: I0312 14:49:47.487254 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52-config-27v4x" event={"ID":"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2","Type":"ContainerStarted","Data":"f636ee0e3059b014460c00cc556a02c3208e8c0f62e970894bdb2e5c1ad01b52"} Mar 12 14:49:47.487625 master-0 kubenswrapper[37036]: I0312 14:49:47.487595 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52-config-27v4x" event={"ID":"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2","Type":"ContainerStarted","Data":"aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018"} Mar 12 14:49:47.493234 master-0 kubenswrapper[37036]: I0312 14:49:47.493184 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"ebdf5229a10689d1b4089a3b13bf3497421d8dfd183f7cd4790b81da49d5dca1"} Mar 12 14:49:47.517213 master-0 kubenswrapper[37036]: I0312 14:49:47.517126 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4mq52-config-27v4x" podStartSLOduration=1.517102433 podStartE2EDuration="1.517102433s" podCreationTimestamp="2026-03-12 14:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:49:47.506796708 +0000 UTC m=+846.514537655" watchObservedRunningTime="2026-03-12 14:49:47.517102433 +0000 UTC m=+846.524843380" Mar 12 14:49:48.514564 master-0 kubenswrapper[37036]: I0312 14:49:48.514497 37036 generic.go:334] "Generic (PLEG): container finished" podID="b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" containerID="f636ee0e3059b014460c00cc556a02c3208e8c0f62e970894bdb2e5c1ad01b52" exitCode=0 Mar 12 14:49:48.514564 master-0 kubenswrapper[37036]: I0312 14:49:48.514559 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52-config-27v4x" event={"ID":"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2","Type":"ContainerDied","Data":"f636ee0e3059b014460c00cc556a02c3208e8c0f62e970894bdb2e5c1ad01b52"} Mar 12 14:49:49.356825 master-0 kubenswrapper[37036]: I0312 14:49:49.354780 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kvdsk"] Mar 12 14:49:49.367848 master-0 kubenswrapper[37036]: I0312 14:49:49.367678 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kvdsk"] Mar 12 14:49:50.892770 master-0 kubenswrapper[37036]: I0312 14:49:50.892717 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4mq52" Mar 12 14:49:51.277026 master-0 kubenswrapper[37036]: I0312 14:49:51.273203 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7b883c-d1db-467a-9f8f-641d11139185" path="/var/lib/kubelet/pods/5e7b883c-d1db-467a-9f8f-641d11139185/volumes" Mar 12 14:49:54.399909 master-0 kubenswrapper[37036]: I0312 14:49:54.399836 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sxdng"] Mar 12 14:49:54.401167 master-0 kubenswrapper[37036]: E0312 14:49:54.401137 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7b883c-d1db-467a-9f8f-641d11139185" containerName="mariadb-account-create-update" Mar 12 14:49:54.401283 master-0 kubenswrapper[37036]: I0312 14:49:54.401268 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7b883c-d1db-467a-9f8f-641d11139185" containerName="mariadb-account-create-update" Mar 12 14:49:54.401826 master-0 kubenswrapper[37036]: I0312 14:49:54.401808 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7b883c-d1db-467a-9f8f-641d11139185" containerName="mariadb-account-create-update" Mar 12 14:49:54.402885 master-0 kubenswrapper[37036]: I0312 
14:49:54.402863 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.407312 master-0 kubenswrapper[37036]: I0312 14:49:54.407258 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 14:49:54.470117 master-0 kubenswrapper[37036]: I0312 14:49:54.470047 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sxdng"] Mar 12 14:49:54.542718 master-0 kubenswrapper[37036]: I0312 14:49:54.542616 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v57n\" (UniqueName: \"kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.542718 master-0 kubenswrapper[37036]: I0312 14:49:54.542724 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.645255 master-0 kubenswrapper[37036]: I0312 14:49:54.645208 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v57n\" (UniqueName: \"kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.645531 master-0 kubenswrapper[37036]: I0312 14:49:54.645498 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.646605 master-0 kubenswrapper[37036]: I0312 14:49:54.646553 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.663250 master-0 kubenswrapper[37036]: I0312 14:49:54.663050 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v57n\" (UniqueName: \"kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n\") pod \"root-account-create-update-sxdng\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:54.760556 master-0 kubenswrapper[37036]: I0312 14:49:54.760488 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:55.591141 master-0 kubenswrapper[37036]: I0312 14:49:55.587275 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:55.602187 master-0 kubenswrapper[37036]: I0312 14:49:55.602125 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4mq52-config-27v4x" Mar 12 14:49:55.602795 master-0 kubenswrapper[37036]: I0312 14:49:55.602665 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4mq52-config-27v4x" event={"ID":"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2","Type":"ContainerDied","Data":"aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018"} Mar 12 14:49:55.602795 master-0 kubenswrapper[37036]: I0312 14:49:55.602713 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaab1b5de1f35b530b3fe9b4040396b10fbbc8be26e75eb2163aefec4844b018" Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684504 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684573 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684615 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684629 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run" (OuterVolumeSpecName: "var-run") pod 
"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684676 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kldzb\" (UniqueName: \"kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684733 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684748 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684807 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:49:55.685239 master-0 kubenswrapper[37036]: I0312 14:49:55.684962 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts\") pod \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\" (UID: \"b3cb08e9-3851-4ab4-bc90-bd463bfee8b2\") " Mar 12 14:49:55.685998 master-0 kubenswrapper[37036]: I0312 14:49:55.685758 37036 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.685998 master-0 kubenswrapper[37036]: I0312 14:49:55.685781 37036 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.685998 master-0 kubenswrapper[37036]: I0312 14:49:55.685794 37036 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.685998 master-0 kubenswrapper[37036]: I0312 14:49:55.685788 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:55.688237 master-0 kubenswrapper[37036]: I0312 14:49:55.687374 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts" (OuterVolumeSpecName: "scripts") pod "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:55.688237 master-0 kubenswrapper[37036]: I0312 14:49:55.688179 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb" (OuterVolumeSpecName: "kube-api-access-kldzb") pod "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" (UID: "b3cb08e9-3851-4ab4-bc90-bd463bfee8b2"). InnerVolumeSpecName "kube-api-access-kldzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:55.788183 master-0 kubenswrapper[37036]: I0312 14:49:55.788119 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.788183 master-0 kubenswrapper[37036]: I0312 14:49:55.788173 37036 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.788183 master-0 kubenswrapper[37036]: I0312 14:49:55.788190 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kldzb\" (UniqueName: \"kubernetes.io/projected/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2-kube-api-access-kldzb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:55.862109 master-0 kubenswrapper[37036]: I0312 14:49:55.862063 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/root-account-create-update-sxdng"] Mar 12 14:49:56.615100 master-0 kubenswrapper[37036]: I0312 14:49:56.615037 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-989wd" event={"ID":"78a7388f-90a4-420a-b2ef-e31fb1fda25e","Type":"ContainerStarted","Data":"567497f53e6e5ca7827e775b06b0cc5ec8b399016a56e4b70007def96dfc70a2"} Mar 12 14:49:56.618103 master-0 kubenswrapper[37036]: I0312 14:49:56.618070 37036 generic.go:334] "Generic (PLEG): container finished" podID="661deaeb-75cd-4a4f-b211-91dcffe41b1b" containerID="9c5a1a8d51e8be6913666e45814bf240ec6a170bc82a90e87cd028d9f38fd2a3" exitCode=0 Mar 12 14:49:56.618255 master-0 kubenswrapper[37036]: I0312 14:49:56.618122 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sxdng" event={"ID":"661deaeb-75cd-4a4f-b211-91dcffe41b1b","Type":"ContainerDied","Data":"9c5a1a8d51e8be6913666e45814bf240ec6a170bc82a90e87cd028d9f38fd2a3"} Mar 12 14:49:56.618255 master-0 kubenswrapper[37036]: I0312 14:49:56.618141 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sxdng" event={"ID":"661deaeb-75cd-4a4f-b211-91dcffe41b1b","Type":"ContainerStarted","Data":"06dc58f00b86c171eed141db24aed1220f321f134d33db57f1babd413e15c16b"} Mar 12 14:49:56.620480 master-0 kubenswrapper[37036]: I0312 14:49:56.620440 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"202e03cf442daab35e7c847c797ff79a457e2bd09157f6aece2b47ba12c3680e"} Mar 12 14:49:56.620480 master-0 kubenswrapper[37036]: I0312 14:49:56.620466 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"c4320c8d60e394c655f4764c7e96b93144630e4877eb4f7b36b5c6c0c585b6ed"} Mar 12 14:49:56.620480 master-0 kubenswrapper[37036]: I0312 
14:49:56.620475 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"de2780d35d4540df78869051328580c1f731bbc4d2d2dbea80e9b00db2804d47"} Mar 12 14:49:56.620480 master-0 kubenswrapper[37036]: I0312 14:49:56.620483 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"69f2e3612388d0601f801069fb4b804daeabbc2dd656c5fd036d3b94bab69598"} Mar 12 14:49:56.671942 master-0 kubenswrapper[37036]: I0312 14:49:56.670501 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-989wd" podStartSLOduration=2.327150894 podStartE2EDuration="15.670476931s" podCreationTimestamp="2026-03-12 14:49:41 +0000 UTC" firstStartedPulling="2026-03-12 14:49:42.131272531 +0000 UTC m=+841.139013468" lastFinishedPulling="2026-03-12 14:49:55.474598568 +0000 UTC m=+854.482339505" observedRunningTime="2026-03-12 14:49:56.639261475 +0000 UTC m=+855.647002412" watchObservedRunningTime="2026-03-12 14:49:56.670476931 +0000 UTC m=+855.678217868" Mar 12 14:49:56.801106 master-0 kubenswrapper[37036]: I0312 14:49:56.798764 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4mq52-config-27v4x"] Mar 12 14:49:56.834226 master-0 kubenswrapper[37036]: I0312 14:49:56.834157 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4mq52-config-27v4x"] Mar 12 14:49:57.263953 master-0 kubenswrapper[37036]: I0312 14:49:57.261954 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" path="/var/lib/kubelet/pods/b3cb08e9-3851-4ab4-bc90-bd463bfee8b2/volumes" Mar 12 14:49:58.095971 master-0 kubenswrapper[37036]: I0312 14:49:58.095873 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:58.246538 master-0 kubenswrapper[37036]: I0312 14:49:58.241387 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts\") pod \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " Mar 12 14:49:58.246538 master-0 kubenswrapper[37036]: I0312 14:49:58.241577 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v57n\" (UniqueName: \"kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n\") pod \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\" (UID: \"661deaeb-75cd-4a4f-b211-91dcffe41b1b\") " Mar 12 14:49:58.246538 master-0 kubenswrapper[37036]: I0312 14:49:58.244986 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "661deaeb-75cd-4a4f-b211-91dcffe41b1b" (UID: "661deaeb-75cd-4a4f-b211-91dcffe41b1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:49:58.246538 master-0 kubenswrapper[37036]: I0312 14:49:58.246113 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n" (OuterVolumeSpecName: "kube-api-access-9v57n") pod "661deaeb-75cd-4a4f-b211-91dcffe41b1b" (UID: "661deaeb-75cd-4a4f-b211-91dcffe41b1b"). InnerVolumeSpecName "kube-api-access-9v57n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:49:58.344941 master-0 kubenswrapper[37036]: I0312 14:49:58.344429 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/661deaeb-75cd-4a4f-b211-91dcffe41b1b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:58.344941 master-0 kubenswrapper[37036]: I0312 14:49:58.344496 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v57n\" (UniqueName: \"kubernetes.io/projected/661deaeb-75cd-4a4f-b211-91dcffe41b1b-kube-api-access-9v57n\") on node \"master-0\" DevicePath \"\"" Mar 12 14:49:58.641164 master-0 kubenswrapper[37036]: I0312 14:49:58.641093 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sxdng" event={"ID":"661deaeb-75cd-4a4f-b211-91dcffe41b1b","Type":"ContainerDied","Data":"06dc58f00b86c171eed141db24aed1220f321f134d33db57f1babd413e15c16b"} Mar 12 14:49:58.641164 master-0 kubenswrapper[37036]: I0312 14:49:58.641142 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sxdng" Mar 12 14:49:58.641164 master-0 kubenswrapper[37036]: I0312 14:49:58.641152 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06dc58f00b86c171eed141db24aed1220f321f134d33db57f1babd413e15c16b" Mar 12 14:49:58.645003 master-0 kubenswrapper[37036]: I0312 14:49:58.644963 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"f57c02a4a933cae1b34c02c57d2ddc6895702277f0bec99ed0dd8ab8ed0b076b"} Mar 12 14:49:58.645099 master-0 kubenswrapper[37036]: I0312 14:49:58.645005 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"aa16b660c91a8356b37835ea412a110e90bfb4b12debfecb45a85a503531e85b"} Mar 12 14:49:58.645099 master-0 kubenswrapper[37036]: I0312 14:49:58.645021 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"495ccf4c49ca25f8d30fc72ab89bcfe0bab654b158e0c6827b4a59b220ad4c43"} Mar 12 14:49:59.288173 master-0 kubenswrapper[37036]: I0312 14:49:59.288109 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 14:49:59.688002 master-0 kubenswrapper[37036]: I0312 14:49:59.687930 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"88e987940233c21012594eb75d447676e6e72604b4c0f7a49038a753f3d941c1"} Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: I0312 14:49:59.870180 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-d5cg7"] Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: E0312 14:49:59.870625 
37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661deaeb-75cd-4a4f-b211-91dcffe41b1b" containerName="mariadb-account-create-update" Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: I0312 14:49:59.870639 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="661deaeb-75cd-4a4f-b211-91dcffe41b1b" containerName="mariadb-account-create-update" Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: E0312 14:49:59.870669 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" containerName="ovn-config" Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: I0312 14:49:59.870678 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" containerName="ovn-config" Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: I0312 14:49:59.870995 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="661deaeb-75cd-4a4f-b211-91dcffe41b1b" containerName="mariadb-account-create-update" Mar 12 14:49:59.871062 master-0 kubenswrapper[37036]: I0312 14:49:59.871030 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cb08e9-3851-4ab4-bc90-bd463bfee8b2" containerName="ovn-config" Mar 12 14:49:59.877127 master-0 kubenswrapper[37036]: I0312 14:49:59.873415 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-d5cg7" Mar 12 14:49:59.884302 master-0 kubenswrapper[37036]: I0312 14:49:59.883456 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-d5cg7"] Mar 12 14:49:59.988024 master-0 kubenswrapper[37036]: I0312 14:49:59.986490 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:49:59.988024 master-0 kubenswrapper[37036]: I0312 14:49:59.986703 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqqx\" (UniqueName: \"kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.002459 master-0 kubenswrapper[37036]: I0312 14:50:00.001463 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0f2a-account-create-update-7cc89"] Mar 12 14:50:00.004052 master-0 kubenswrapper[37036]: I0312 14:50:00.003142 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.010775 master-0 kubenswrapper[37036]: I0312 14:50:00.010740 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 14:50:00.029113 master-0 kubenswrapper[37036]: I0312 14:50:00.028419 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0f2a-account-create-update-7cc89"] Mar 12 14:50:00.089052 master-0 kubenswrapper[37036]: I0312 14:50:00.088998 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.089319 master-0 kubenswrapper[37036]: I0312 14:50:00.089303 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.089512 master-0 kubenswrapper[37036]: I0312 14:50:00.089467 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstg9\" (UniqueName: \"kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.089967 master-0 kubenswrapper[37036]: I0312 14:50:00.089931 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.090093 master-0 kubenswrapper[37036]: I0312 14:50:00.090063 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqqx\" (UniqueName: \"kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.120731 master-0 kubenswrapper[37036]: I0312 14:50:00.120688 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqqx\" (UniqueName: \"kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx\") pod \"cinder-db-create-d5cg7\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.196990 master-0 kubenswrapper[37036]: I0312 14:50:00.196641 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstg9\" (UniqueName: \"kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.197225 master-0 kubenswrapper[37036]: I0312 14:50:00.197009 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.197280 master-0 kubenswrapper[37036]: I0312 14:50:00.197213 37036 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-sync-wmxbp"] Mar 12 14:50:00.198106 master-0 kubenswrapper[37036]: I0312 14:50:00.197845 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.198935 master-0 kubenswrapper[37036]: I0312 14:50:00.198667 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.202873 master-0 kubenswrapper[37036]: I0312 14:50:00.202837 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 14:50:00.203349 master-0 kubenswrapper[37036]: I0312 14:50:00.203324 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 14:50:00.208197 master-0 kubenswrapper[37036]: I0312 14:50:00.206748 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 14:50:00.240422 master-0 kubenswrapper[37036]: I0312 14:50:00.226058 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstg9\" (UniqueName: \"kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9\") pod \"cinder-0f2a-account-create-update-7cc89\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.248669 master-0 kubenswrapper[37036]: I0312 14:50:00.246772 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:00.268330 master-0 kubenswrapper[37036]: I0312 14:50:00.268265 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wmxbp"] Mar 12 14:50:00.298577 master-0 kubenswrapper[37036]: I0312 14:50:00.298509 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.299191 master-0 kubenswrapper[37036]: I0312 14:50:00.299084 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrnn\" (UniqueName: \"kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.309432 master-0 kubenswrapper[37036]: I0312 14:50:00.302017 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.336882 master-0 kubenswrapper[37036]: I0312 14:50:00.336816 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:00.358324 master-0 kubenswrapper[37036]: I0312 14:50:00.358195 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gmb6j"] Mar 12 14:50:00.359654 master-0 kubenswrapper[37036]: I0312 14:50:00.359620 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.369292 master-0 kubenswrapper[37036]: I0312 14:50:00.369244 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gmb6j"] Mar 12 14:50:00.379263 master-0 kubenswrapper[37036]: I0312 14:50:00.378662 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8992-account-create-update-6z47s"] Mar 12 14:50:00.380381 master-0 kubenswrapper[37036]: I0312 14:50:00.379961 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.387842 master-0 kubenswrapper[37036]: I0312 14:50:00.382956 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 12 14:50:00.390638 master-0 kubenswrapper[37036]: I0312 14:50:00.390337 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8992-account-create-update-6z47s"] Mar 12 14:50:00.414959 master-0 kubenswrapper[37036]: I0312 14:50:00.411048 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.414959 master-0 kubenswrapper[37036]: I0312 14:50:00.411266 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrnn\" (UniqueName: \"kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.414959 master-0 kubenswrapper[37036]: I0312 14:50:00.411342 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f2d9c\" (UniqueName: \"kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.414959 master-0 kubenswrapper[37036]: I0312 14:50:00.411379 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.414959 master-0 kubenswrapper[37036]: I0312 14:50:00.411447 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.422326 master-0 kubenswrapper[37036]: I0312 14:50:00.422259 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.423052 master-0 kubenswrapper[37036]: I0312 14:50:00.422983 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.438101 master-0 kubenswrapper[37036]: I0312 14:50:00.438031 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrnn\" (UniqueName: 
\"kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn\") pod \"keystone-db-sync-wmxbp\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") " pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.515566 master-0 kubenswrapper[37036]: I0312 14:50:00.513927 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.515566 master-0 kubenswrapper[37036]: I0312 14:50:00.514007 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb94z\" (UniqueName: \"kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.515566 master-0 kubenswrapper[37036]: I0312 14:50:00.514171 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.515566 master-0 kubenswrapper[37036]: I0312 14:50:00.514328 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2d9c\" (UniqueName: \"kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.515566 master-0 kubenswrapper[37036]: I0312 14:50:00.515518 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.538499 master-0 kubenswrapper[37036]: I0312 14:50:00.538431 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2d9c\" (UniqueName: \"kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c\") pod \"neutron-db-create-gmb6j\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.591886 master-0 kubenswrapper[37036]: I0312 14:50:00.591834 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wmxbp" Mar 12 14:50:00.618029 master-0 kubenswrapper[37036]: I0312 14:50:00.616681 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.618029 master-0 kubenswrapper[37036]: I0312 14:50:00.616756 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb94z\" (UniqueName: \"kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.618029 master-0 kubenswrapper[37036]: I0312 14:50:00.617971 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.635206 master-0 kubenswrapper[37036]: I0312 14:50:00.635126 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb94z\" (UniqueName: \"kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z\") pod \"neutron-8992-account-create-update-6z47s\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:00.697809 master-0 kubenswrapper[37036]: I0312 14:50:00.697349 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:00.713763 master-0 kubenswrapper[37036]: I0312 14:50:00.713293 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:01.419377 master-0 kubenswrapper[37036]: I0312 14:50:01.418012 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wmxbp"] Mar 12 14:50:01.431292 master-0 kubenswrapper[37036]: I0312 14:50:01.430716 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0f2a-account-create-update-7cc89"] Mar 12 14:50:01.718039 master-0 kubenswrapper[37036]: I0312 14:50:01.717991 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wmxbp" event={"ID":"4272c013-816c-4779-a81d-2945610612f3","Type":"ContainerStarted","Data":"471220c1173241cc7681d8ae6714416c9681b82238c9087d1cb2dc9c7c50785f"} Mar 12 14:50:01.726114 master-0 kubenswrapper[37036]: I0312 14:50:01.726062 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"4279615984ca441740f6b9f969578cc5d5ba34f6cef607e9a1d2616e9aa2c378"} Mar 12 14:50:01.726215 master-0 kubenswrapper[37036]: I0312 14:50:01.726131 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"9de23346b9ec604f3b57998c776a8077aa6146b5e010f8aeae054d0aec5e6a20"} Mar 12 14:50:01.726215 master-0 kubenswrapper[37036]: I0312 14:50:01.726147 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"9f0745a0f7c2f20afeca320928de3b2ea6a20e3a917a01f34de56af8d0a25327"} Mar 12 14:50:01.732088 master-0 kubenswrapper[37036]: I0312 14:50:01.732035 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f2a-account-create-update-7cc89" event={"ID":"4d80a8ef-edb7-4620-a67f-dcdcdd80a907","Type":"ContainerStarted","Data":"0bd200136aae3d6b4ee03bc09b29dfcafaaf3d47ba602d56e1f0c9af7751a733"} Mar 12 14:50:01.732147 master-0 kubenswrapper[37036]: I0312 14:50:01.732094 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f2a-account-create-update-7cc89" event={"ID":"4d80a8ef-edb7-4620-a67f-dcdcdd80a907","Type":"ContainerStarted","Data":"41678cd6b8231b1de031620c0387ae80f07c8e31d9da3164aaadee1c70f4fb54"} Mar 12 14:50:01.767059 master-0 kubenswrapper[37036]: I0312 14:50:01.765967 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0f2a-account-create-update-7cc89" podStartSLOduration=2.765949646 podStartE2EDuration="2.765949646s" podCreationTimestamp="2026-03-12 14:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:01.757527557 +0000 UTC m=+860.765268494" 
watchObservedRunningTime="2026-03-12 14:50:01.765949646 +0000 UTC m=+860.773690583" Mar 12 14:50:01.849660 master-0 kubenswrapper[37036]: I0312 14:50:01.847076 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 14:50:01.860308 master-0 kubenswrapper[37036]: W0312 14:50:01.857485 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad87258_d41a_4e83_9215_354562cf0075.slice/crio-7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef WatchSource:0}: Error finding container 7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef: Status 404 returned error can't find the container with id 7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef Mar 12 14:50:01.883963 master-0 kubenswrapper[37036]: I0312 14:50:01.882826 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gmb6j"] Mar 12 14:50:01.917429 master-0 kubenswrapper[37036]: I0312 14:50:01.915661 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-d5cg7"] Mar 12 14:50:01.961053 master-0 kubenswrapper[37036]: I0312 14:50:01.960985 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8992-account-create-update-6z47s"] Mar 12 14:50:02.752087 master-0 kubenswrapper[37036]: I0312 14:50:02.752010 37036 generic.go:334] "Generic (PLEG): container finished" podID="4d80a8ef-edb7-4620-a67f-dcdcdd80a907" containerID="0bd200136aae3d6b4ee03bc09b29dfcafaaf3d47ba602d56e1f0c9af7751a733" exitCode=0 Mar 12 14:50:02.752589 master-0 kubenswrapper[37036]: I0312 14:50:02.752089 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f2a-account-create-update-7cc89" event={"ID":"4d80a8ef-edb7-4620-a67f-dcdcdd80a907","Type":"ContainerDied","Data":"0bd200136aae3d6b4ee03bc09b29dfcafaaf3d47ba602d56e1f0c9af7751a733"} Mar 12 14:50:02.754208 
master-0 kubenswrapper[37036]: I0312 14:50:02.754040 37036 generic.go:334] "Generic (PLEG): container finished" podID="7ad87258-d41a-4e83-9215-354562cf0075" containerID="5704b832261354cf2252d77901e285b7dd4eb9e5a26cca0464c872bff351ddc1" exitCode=0 Mar 12 14:50:02.754208 master-0 kubenswrapper[37036]: I0312 14:50:02.754087 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmb6j" event={"ID":"7ad87258-d41a-4e83-9215-354562cf0075","Type":"ContainerDied","Data":"5704b832261354cf2252d77901e285b7dd4eb9e5a26cca0464c872bff351ddc1"} Mar 12 14:50:02.754208 master-0 kubenswrapper[37036]: I0312 14:50:02.754113 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmb6j" event={"ID":"7ad87258-d41a-4e83-9215-354562cf0075","Type":"ContainerStarted","Data":"7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef"} Mar 12 14:50:02.756532 master-0 kubenswrapper[37036]: I0312 14:50:02.755634 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8992-account-create-update-6z47s" event={"ID":"800b0f0d-e0c1-458c-92b5-2773be83e138","Type":"ContainerStarted","Data":"bb53bcb6a9ae33cd3888f7eec9b60f5c459d9cff75ace5d9d7aadfe6cdc15816"} Mar 12 14:50:02.756532 master-0 kubenswrapper[37036]: I0312 14:50:02.755660 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8992-account-create-update-6z47s" event={"ID":"800b0f0d-e0c1-458c-92b5-2773be83e138","Type":"ContainerStarted","Data":"f770652d5ba992b2f450a6c2ffc1b9f2277bb1e253f9d9824da4aecd5ebdc3fc"} Mar 12 14:50:02.763055 master-0 kubenswrapper[37036]: I0312 14:50:02.762984 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"b22405495c7fafc2ef7a36a7b34c25c8f2c788bc9e99b5e514d314a7f87ec58f"} Mar 12 14:50:02.763265 master-0 kubenswrapper[37036]: I0312 14:50:02.763067 37036 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"3a972a1920719afe3d0345fc52ffee7c180ca585ef7c2ace917f41461f2f222f"} Mar 12 14:50:02.763265 master-0 kubenswrapper[37036]: I0312 14:50:02.763079 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"14d26cd07e7dd931a2ada88b8632f29f2bf903f3fbfaf9c3fbae816da9ae79fc"} Mar 12 14:50:02.765167 master-0 kubenswrapper[37036]: I0312 14:50:02.765126 37036 generic.go:334] "Generic (PLEG): container finished" podID="690417a0-ecec-4a79-ab5b-789407fec2b0" containerID="6f0bbda313200547b7afc1c642c43b0573686532b81ecba60e71e755b5d89759" exitCode=0 Mar 12 14:50:02.765262 master-0 kubenswrapper[37036]: I0312 14:50:02.765185 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d5cg7" event={"ID":"690417a0-ecec-4a79-ab5b-789407fec2b0","Type":"ContainerDied","Data":"6f0bbda313200547b7afc1c642c43b0573686532b81ecba60e71e755b5d89759"} Mar 12 14:50:02.765262 master-0 kubenswrapper[37036]: I0312 14:50:02.765216 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d5cg7" event={"ID":"690417a0-ecec-4a79-ab5b-789407fec2b0","Type":"ContainerStarted","Data":"02b6e40db722d5da6245e7609ac27d06836a16dec52ac7a6ff1714c030a462a4"} Mar 12 14:50:02.831725 master-0 kubenswrapper[37036]: I0312 14:50:02.831629 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8992-account-create-update-6z47s" podStartSLOduration=2.831610007 podStartE2EDuration="2.831610007s" podCreationTimestamp="2026-03-12 14:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:02.80556202 +0000 UTC m=+861.813302947" watchObservedRunningTime="2026-03-12 
14:50:02.831610007 +0000 UTC m=+861.839350944" Mar 12 14:50:03.779829 master-0 kubenswrapper[37036]: I0312 14:50:03.779769 37036 generic.go:334] "Generic (PLEG): container finished" podID="800b0f0d-e0c1-458c-92b5-2773be83e138" containerID="bb53bcb6a9ae33cd3888f7eec9b60f5c459d9cff75ace5d9d7aadfe6cdc15816" exitCode=0 Mar 12 14:50:03.781614 master-0 kubenswrapper[37036]: I0312 14:50:03.779861 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8992-account-create-update-6z47s" event={"ID":"800b0f0d-e0c1-458c-92b5-2773be83e138","Type":"ContainerDied","Data":"bb53bcb6a9ae33cd3888f7eec9b60f5c459d9cff75ace5d9d7aadfe6cdc15816"} Mar 12 14:50:03.787635 master-0 kubenswrapper[37036]: I0312 14:50:03.787582 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"02b0bb9f-56cd-4ffe-9e37-2200e4baec09","Type":"ContainerStarted","Data":"ea429f10fe7e9b533d699fb7671c189ad97118f285d85093891cbbfc9f76fa91"} Mar 12 14:50:03.840425 master-0 kubenswrapper[37036]: I0312 14:50:03.840333 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.85090499 podStartE2EDuration="36.840309543s" podCreationTimestamp="2026-03-12 14:49:27 +0000 UTC" firstStartedPulling="2026-03-12 14:49:46.736111228 +0000 UTC m=+845.743852165" lastFinishedPulling="2026-03-12 14:50:00.725515781 +0000 UTC m=+859.733256718" observedRunningTime="2026-03-12 14:50:03.834574451 +0000 UTC m=+862.842315388" watchObservedRunningTime="2026-03-12 14:50:03.840309543 +0000 UTC m=+862.848050480" Mar 12 14:50:04.203393 master-0 kubenswrapper[37036]: I0312 14:50:04.187976 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"] Mar 12 14:50:04.209318 master-0 kubenswrapper[37036]: I0312 14:50:04.204883 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.229805 master-0 kubenswrapper[37036]: I0312 14:50:04.228578 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 12 14:50:04.249474 master-0 kubenswrapper[37036]: I0312 14:50:04.248950 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"] Mar 12 14:50:04.280254 master-0 kubenswrapper[37036]: I0312 14:50:04.280116 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.280254 master-0 kubenswrapper[37036]: I0312 14:50:04.280205 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxgp\" (UniqueName: \"kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.280254 master-0 kubenswrapper[37036]: I0312 14:50:04.280248 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.280530 master-0 kubenswrapper[37036]: I0312 14:50:04.280302 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config\") pod 
\"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.280530 master-0 kubenswrapper[37036]: I0312 14:50:04.280418 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.280625 master-0 kubenswrapper[37036]: I0312 14:50:04.280560 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383461 master-0 kubenswrapper[37036]: I0312 14:50:04.383394 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnxgp\" (UniqueName: \"kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383685 master-0 kubenswrapper[37036]: I0312 14:50:04.383480 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383685 master-0 kubenswrapper[37036]: I0312 14:50:04.383534 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383685 master-0 kubenswrapper[37036]: I0312 14:50:04.383628 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383792 master-0 kubenswrapper[37036]: I0312 14:50:04.383728 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.383886 master-0 kubenswrapper[37036]: I0312 14:50:04.383852 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.385893 master-0 kubenswrapper[37036]: I0312 14:50:04.385852 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.389982 master-0 kubenswrapper[37036]: I0312 14:50:04.388537 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.389982 master-0 kubenswrapper[37036]: I0312 14:50:04.388631 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.390155 master-0 kubenswrapper[37036]: I0312 14:50:04.390083 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.392345 master-0 kubenswrapper[37036]: I0312 14:50:04.391330 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.403102 master-0 kubenswrapper[37036]: I0312 14:50:04.403056 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxgp\" (UniqueName: \"kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp\") pod \"dnsmasq-dns-857cb5775f-5b6ts\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:04.546000 master-0 kubenswrapper[37036]: I0312 14:50:04.545943 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:06.827155 master-0 kubenswrapper[37036]: I0312 14:50:06.827105 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d5cg7" event={"ID":"690417a0-ecec-4a79-ab5b-789407fec2b0","Type":"ContainerDied","Data":"02b6e40db722d5da6245e7609ac27d06836a16dec52ac7a6ff1714c030a462a4"} Mar 12 14:50:06.827155 master-0 kubenswrapper[37036]: I0312 14:50:06.827154 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b6e40db722d5da6245e7609ac27d06836a16dec52ac7a6ff1714c030a462a4" Mar 12 14:50:06.829517 master-0 kubenswrapper[37036]: I0312 14:50:06.829428 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f2a-account-create-update-7cc89" event={"ID":"4d80a8ef-edb7-4620-a67f-dcdcdd80a907","Type":"ContainerDied","Data":"41678cd6b8231b1de031620c0387ae80f07c8e31d9da3164aaadee1c70f4fb54"} Mar 12 14:50:06.829517 master-0 kubenswrapper[37036]: I0312 14:50:06.829455 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41678cd6b8231b1de031620c0387ae80f07c8e31d9da3164aaadee1c70f4fb54" Mar 12 14:50:06.831304 master-0 kubenswrapper[37036]: I0312 14:50:06.831274 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmb6j" event={"ID":"7ad87258-d41a-4e83-9215-354562cf0075","Type":"ContainerDied","Data":"7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef"} Mar 12 14:50:06.831304 master-0 kubenswrapper[37036]: I0312 14:50:06.831297 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebb7871066975ec6d98906f589d10e36c8c9ac160a82d13c6db5cc61f5854ef" Mar 12 14:50:07.156925 master-0 kubenswrapper[37036]: I0312 14:50:07.152557 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:07.163834 master-0 kubenswrapper[37036]: I0312 14:50:07.158793 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:07.184700 master-0 kubenswrapper[37036]: I0312 14:50:07.184649 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:07.202853 master-0 kubenswrapper[37036]: I0312 14:50:07.198342 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:07.258239 master-0 kubenswrapper[37036]: I0312 14:50:07.257731 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts\") pod \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " Mar 12 14:50:07.258453 master-0 kubenswrapper[37036]: I0312 14:50:07.258262 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts\") pod \"800b0f0d-e0c1-458c-92b5-2773be83e138\" (UID: \"800b0f0d-e0c1-458c-92b5-2773be83e138\") " Mar 12 14:50:07.258453 master-0 kubenswrapper[37036]: I0312 14:50:07.258182 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d80a8ef-edb7-4620-a67f-dcdcdd80a907" (UID: "4d80a8ef-edb7-4620-a67f-dcdcdd80a907"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:07.258453 master-0 kubenswrapper[37036]: I0312 14:50:07.258436 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts\") pod \"690417a0-ecec-4a79-ab5b-789407fec2b0\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " Mar 12 14:50:07.258774 master-0 kubenswrapper[37036]: I0312 14:50:07.258738 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "800b0f0d-e0c1-458c-92b5-2773be83e138" (UID: "800b0f0d-e0c1-458c-92b5-2773be83e138"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:07.258999 master-0 kubenswrapper[37036]: I0312 14:50:07.258977 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "690417a0-ecec-4a79-ab5b-789407fec2b0" (UID: "690417a0-ecec-4a79-ab5b-789407fec2b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:07.259152 master-0 kubenswrapper[37036]: I0312 14:50:07.259133 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstg9\" (UniqueName: \"kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9\") pod \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\" (UID: \"4d80a8ef-edb7-4620-a67f-dcdcdd80a907\") " Mar 12 14:50:07.259713 master-0 kubenswrapper[37036]: I0312 14:50:07.259683 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ad87258-d41a-4e83-9215-354562cf0075" (UID: "7ad87258-d41a-4e83-9215-354562cf0075"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:07.259713 master-0 kubenswrapper[37036]: I0312 14:50:07.259700 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts\") pod \"7ad87258-d41a-4e83-9215-354562cf0075\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " Mar 12 14:50:07.259808 master-0 kubenswrapper[37036]: I0312 14:50:07.259779 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2d9c\" (UniqueName: \"kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c\") pod \"7ad87258-d41a-4e83-9215-354562cf0075\" (UID: \"7ad87258-d41a-4e83-9215-354562cf0075\") " Mar 12 14:50:07.259863 master-0 kubenswrapper[37036]: I0312 14:50:07.259838 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb94z\" (UniqueName: \"kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z\") pod \"800b0f0d-e0c1-458c-92b5-2773be83e138\" (UID: 
\"800b0f0d-e0c1-458c-92b5-2773be83e138\") " Mar 12 14:50:07.260221 master-0 kubenswrapper[37036]: I0312 14:50:07.260192 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqqx\" (UniqueName: \"kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx\") pod \"690417a0-ecec-4a79-ab5b-789407fec2b0\" (UID: \"690417a0-ecec-4a79-ab5b-789407fec2b0\") " Mar 12 14:50:07.261282 master-0 kubenswrapper[37036]: I0312 14:50:07.261254 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.261282 master-0 kubenswrapper[37036]: I0312 14:50:07.261283 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/800b0f0d-e0c1-458c-92b5-2773be83e138-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.261409 master-0 kubenswrapper[37036]: I0312 14:50:07.261298 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690417a0-ecec-4a79-ab5b-789407fec2b0-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.261409 master-0 kubenswrapper[37036]: I0312 14:50:07.261338 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ad87258-d41a-4e83-9215-354562cf0075-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.262557 master-0 kubenswrapper[37036]: I0312 14:50:07.262516 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z" (OuterVolumeSpecName: "kube-api-access-nb94z") pod "800b0f0d-e0c1-458c-92b5-2773be83e138" (UID: "800b0f0d-e0c1-458c-92b5-2773be83e138"). 
InnerVolumeSpecName "kube-api-access-nb94z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:07.263594 master-0 kubenswrapper[37036]: I0312 14:50:07.262888 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c" (OuterVolumeSpecName: "kube-api-access-f2d9c") pod "7ad87258-d41a-4e83-9215-354562cf0075" (UID: "7ad87258-d41a-4e83-9215-354562cf0075"). InnerVolumeSpecName "kube-api-access-f2d9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:07.265628 master-0 kubenswrapper[37036]: I0312 14:50:07.265576 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx" (OuterVolumeSpecName: "kube-api-access-lkqqx") pod "690417a0-ecec-4a79-ab5b-789407fec2b0" (UID: "690417a0-ecec-4a79-ab5b-789407fec2b0"). InnerVolumeSpecName "kube-api-access-lkqqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:07.265745 master-0 kubenswrapper[37036]: I0312 14:50:07.265675 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9" (OuterVolumeSpecName: "kube-api-access-bstg9") pod "4d80a8ef-edb7-4620-a67f-dcdcdd80a907" (UID: "4d80a8ef-edb7-4620-a67f-dcdcdd80a907"). InnerVolumeSpecName "kube-api-access-bstg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:07.315381 master-0 kubenswrapper[37036]: I0312 14:50:07.315309 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"] Mar 12 14:50:07.321234 master-0 kubenswrapper[37036]: W0312 14:50:07.321175 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f950b6e_f5ef_4938_99f9_e37c8300503e.slice/crio-f9e9d1d2f371344eb8fd090dcdb1fafe16db88d56860c55c49354d74d41f1137 WatchSource:0}: Error finding container f9e9d1d2f371344eb8fd090dcdb1fafe16db88d56860c55c49354d74d41f1137: Status 404 returned error can't find the container with id f9e9d1d2f371344eb8fd090dcdb1fafe16db88d56860c55c49354d74d41f1137 Mar 12 14:50:07.362991 master-0 kubenswrapper[37036]: I0312 14:50:07.362946 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstg9\" (UniqueName: \"kubernetes.io/projected/4d80a8ef-edb7-4620-a67f-dcdcdd80a907-kube-api-access-bstg9\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.362991 master-0 kubenswrapper[37036]: I0312 14:50:07.362987 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2d9c\" (UniqueName: \"kubernetes.io/projected/7ad87258-d41a-4e83-9215-354562cf0075-kube-api-access-f2d9c\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.362991 master-0 kubenswrapper[37036]: I0312 14:50:07.362998 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb94z\" (UniqueName: \"kubernetes.io/projected/800b0f0d-e0c1-458c-92b5-2773be83e138-kube-api-access-nb94z\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:07.363233 master-0 kubenswrapper[37036]: I0312 14:50:07.363008 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqqx\" (UniqueName: \"kubernetes.io/projected/690417a0-ecec-4a79-ab5b-789407fec2b0-kube-api-access-lkqqx\") on node \"master-0\" DevicePath \"\"" Mar 12 
14:50:07.844970 master-0 kubenswrapper[37036]: I0312 14:50:07.844917 37036 generic.go:334] "Generic (PLEG): container finished" podID="78a7388f-90a4-420a-b2ef-e31fb1fda25e" containerID="567497f53e6e5ca7827e775b06b0cc5ec8b399016a56e4b70007def96dfc70a2" exitCode=0 Mar 12 14:50:07.845536 master-0 kubenswrapper[37036]: I0312 14:50:07.845013 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-989wd" event={"ID":"78a7388f-90a4-420a-b2ef-e31fb1fda25e","Type":"ContainerDied","Data":"567497f53e6e5ca7827e775b06b0cc5ec8b399016a56e4b70007def96dfc70a2"} Mar 12 14:50:07.849458 master-0 kubenswrapper[37036]: I0312 14:50:07.849399 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8992-account-create-update-6z47s" Mar 12 14:50:07.849728 master-0 kubenswrapper[37036]: I0312 14:50:07.849403 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8992-account-create-update-6z47s" event={"ID":"800b0f0d-e0c1-458c-92b5-2773be83e138","Type":"ContainerDied","Data":"f770652d5ba992b2f450a6c2ffc1b9f2277bb1e253f9d9824da4aecd5ebdc3fc"} Mar 12 14:50:07.849728 master-0 kubenswrapper[37036]: I0312 14:50:07.849540 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f770652d5ba992b2f450a6c2ffc1b9f2277bb1e253f9d9824da4aecd5ebdc3fc" Mar 12 14:50:07.854667 master-0 kubenswrapper[37036]: I0312 14:50:07.854627 37036 generic.go:334] "Generic (PLEG): container finished" podID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerID="c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1" exitCode=0 Mar 12 14:50:07.854784 master-0 kubenswrapper[37036]: I0312 14:50:07.854721 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" event={"ID":"3f950b6e-f5ef-4938-99f9-e37c8300503e","Type":"ContainerDied","Data":"c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1"} Mar 12 14:50:07.854784 master-0 
kubenswrapper[37036]: I0312 14:50:07.854761 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" event={"ID":"3f950b6e-f5ef-4938-99f9-e37c8300503e","Type":"ContainerStarted","Data":"f9e9d1d2f371344eb8fd090dcdb1fafe16db88d56860c55c49354d74d41f1137"} Mar 12 14:50:07.857975 master-0 kubenswrapper[37036]: I0312 14:50:07.857692 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f2a-account-create-update-7cc89" Mar 12 14:50:07.857975 master-0 kubenswrapper[37036]: I0312 14:50:07.857783 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wmxbp" event={"ID":"4272c013-816c-4779-a81d-2945610612f3","Type":"ContainerStarted","Data":"b5e71b97f3cb342c1df35c00a0a1fd789fb1da64152dd434e9321f899e419b74"} Mar 12 14:50:07.857975 master-0 kubenswrapper[37036]: I0312 14:50:07.857828 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d5cg7" Mar 12 14:50:07.857975 master-0 kubenswrapper[37036]: I0312 14:50:07.857850 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gmb6j" Mar 12 14:50:07.913845 master-0 kubenswrapper[37036]: I0312 14:50:07.913733 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wmxbp" podStartSLOduration=2.469627092 podStartE2EDuration="7.913709081s" podCreationTimestamp="2026-03-12 14:50:00 +0000 UTC" firstStartedPulling="2026-03-12 14:50:01.452554537 +0000 UTC m=+860.460295474" lastFinishedPulling="2026-03-12 14:50:06.896636506 +0000 UTC m=+865.904377463" observedRunningTime="2026-03-12 14:50:07.90318528 +0000 UTC m=+866.910926217" watchObservedRunningTime="2026-03-12 14:50:07.913709081 +0000 UTC m=+866.921450008" Mar 12 14:50:08.871339 master-0 kubenswrapper[37036]: I0312 14:50:08.871278 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" event={"ID":"3f950b6e-f5ef-4938-99f9-e37c8300503e","Type":"ContainerStarted","Data":"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"} Mar 12 14:50:08.871862 master-0 kubenswrapper[37036]: I0312 14:50:08.871832 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:08.898318 master-0 kubenswrapper[37036]: I0312 14:50:08.898223 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" podStartSLOduration=4.898196297 podStartE2EDuration="4.898196297s" podCreationTimestamp="2026-03-12 14:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:08.891226904 +0000 UTC m=+867.898967851" watchObservedRunningTime="2026-03-12 14:50:08.898196297 +0000 UTC m=+867.905937254" Mar 12 14:50:09.353711 master-0 kubenswrapper[37036]: I0312 14:50:09.353651 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-989wd" Mar 12 14:50:09.415307 master-0 kubenswrapper[37036]: I0312 14:50:09.415244 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data\") pod \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " Mar 12 14:50:09.415508 master-0 kubenswrapper[37036]: I0312 14:50:09.415316 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle\") pod \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " Mar 12 14:50:09.415543 master-0 kubenswrapper[37036]: I0312 14:50:09.415518 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data\") pod \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " Mar 12 14:50:09.415621 master-0 kubenswrapper[37036]: I0312 14:50:09.415584 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtpb\" (UniqueName: \"kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb\") pod \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\" (UID: \"78a7388f-90a4-420a-b2ef-e31fb1fda25e\") " Mar 12 14:50:09.420936 master-0 kubenswrapper[37036]: I0312 14:50:09.420687 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "78a7388f-90a4-420a-b2ef-e31fb1fda25e" (UID: "78a7388f-90a4-420a-b2ef-e31fb1fda25e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:09.424470 master-0 kubenswrapper[37036]: I0312 14:50:09.423112 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb" (OuterVolumeSpecName: "kube-api-access-dmtpb") pod "78a7388f-90a4-420a-b2ef-e31fb1fda25e" (UID: "78a7388f-90a4-420a-b2ef-e31fb1fda25e"). InnerVolumeSpecName "kube-api-access-dmtpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:09.462917 master-0 kubenswrapper[37036]: I0312 14:50:09.462831 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78a7388f-90a4-420a-b2ef-e31fb1fda25e" (UID: "78a7388f-90a4-420a-b2ef-e31fb1fda25e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:09.476891 master-0 kubenswrapper[37036]: I0312 14:50:09.476821 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data" (OuterVolumeSpecName: "config-data") pod "78a7388f-90a4-420a-b2ef-e31fb1fda25e" (UID: "78a7388f-90a4-420a-b2ef-e31fb1fda25e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:09.517638 master-0 kubenswrapper[37036]: I0312 14:50:09.517581 37036 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:09.517638 master-0 kubenswrapper[37036]: I0312 14:50:09.517626 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:09.517638 master-0 kubenswrapper[37036]: I0312 14:50:09.517636 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78a7388f-90a4-420a-b2ef-e31fb1fda25e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:09.517638 master-0 kubenswrapper[37036]: I0312 14:50:09.517648 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtpb\" (UniqueName: \"kubernetes.io/projected/78a7388f-90a4-420a-b2ef-e31fb1fda25e-kube-api-access-dmtpb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:09.883994 master-0 kubenswrapper[37036]: I0312 14:50:09.883863 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-989wd" Mar 12 14:50:09.888180 master-0 kubenswrapper[37036]: I0312 14:50:09.888127 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-989wd" event={"ID":"78a7388f-90a4-420a-b2ef-e31fb1fda25e","Type":"ContainerDied","Data":"cce253be1910b2659f7b5bbe842e262b1b048b283695aa30f3c81858c3aa77ca"} Mar 12 14:50:09.888326 master-0 kubenswrapper[37036]: I0312 14:50:09.888186 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce253be1910b2659f7b5bbe842e262b1b048b283695aa30f3c81858c3aa77ca" Mar 12 14:50:10.099923 master-0 kubenswrapper[37036]: E0312 14:50:10.099838 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78a7388f_90a4_420a_b2ef_e31fb1fda25e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78a7388f_90a4_420a_b2ef_e31fb1fda25e.slice/crio-cce253be1910b2659f7b5bbe842e262b1b048b283695aa30f3c81858c3aa77ca\": RecentStats: unable to find data in memory cache]" Mar 12 14:50:10.730995 master-0 kubenswrapper[37036]: I0312 14:50:10.729403 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"] Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759019 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"] Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: E0312 14:50:10.759470 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800b0f0d-e0c1-458c-92b5-2773be83e138" containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759486 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="800b0f0d-e0c1-458c-92b5-2773be83e138" 
containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: E0312 14:50:10.759535 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad87258-d41a-4e83-9215-354562cf0075" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759542 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad87258-d41a-4e83-9215-354562cf0075" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: E0312 14:50:10.759549 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d80a8ef-edb7-4620-a67f-dcdcdd80a907" containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759556 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d80a8ef-edb7-4620-a67f-dcdcdd80a907" containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: E0312 14:50:10.759568 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a7388f-90a4-420a-b2ef-e31fb1fda25e" containerName="glance-db-sync" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759576 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a7388f-90a4-420a-b2ef-e31fb1fda25e" containerName="glance-db-sync" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: E0312 14:50:10.759585 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690417a0-ecec-4a79-ab5b-789407fec2b0" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759591 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="690417a0-ecec-4a79-ab5b-789407fec2b0" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759793 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="800b0f0d-e0c1-458c-92b5-2773be83e138" 
containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759824 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d80a8ef-edb7-4620-a67f-dcdcdd80a907" containerName="mariadb-account-create-update" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759842 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad87258-d41a-4e83-9215-354562cf0075" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759857 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="690417a0-ecec-4a79-ab5b-789407fec2b0" containerName="mariadb-database-create" Mar 12 14:50:10.759951 master-0 kubenswrapper[37036]: I0312 14:50:10.759871 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a7388f-90a4-420a-b2ef-e31fb1fda25e" containerName="glance-db-sync" Mar 12 14:50:10.763921 master-0 kubenswrapper[37036]: I0312 14:50:10.760983 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.794920 master-0 kubenswrapper[37036]: I0312 14:50:10.793606 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"] Mar 12 14:50:10.874859 master-0 kubenswrapper[37036]: I0312 14:50:10.874811 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzs9\" (UniqueName: \"kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.875090 master-0 kubenswrapper[37036]: I0312 14:50:10.874913 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.875090 master-0 kubenswrapper[37036]: I0312 14:50:10.874967 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.875090 master-0 kubenswrapper[37036]: I0312 14:50:10.874994 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.875090 master-0 kubenswrapper[37036]: 
I0312 14:50:10.875056 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.875090 master-0 kubenswrapper[37036]: I0312 14:50:10.875089 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.976557 master-0 kubenswrapper[37036]: I0312 14:50:10.976498 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.976557 master-0 kubenswrapper[37036]: I0312 14:50:10.976566 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.977150 master-0 kubenswrapper[37036]: I0312 14:50:10.976663 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 
14:50:10.977150 master-0 kubenswrapper[37036]: I0312 14:50:10.976710 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.977150 master-0 kubenswrapper[37036]: I0312 14:50:10.976746 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzs9\" (UniqueName: \"kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.977150 master-0 kubenswrapper[37036]: I0312 14:50:10.976813 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.977925 master-0 kubenswrapper[37036]: I0312 14:50:10.977713 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.978467 master-0 kubenswrapper[37036]: I0312 14:50:10.978444 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " 
pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.979132 master-0 kubenswrapper[37036]: I0312 14:50:10.979103 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.984978 master-0 kubenswrapper[37036]: I0312 14:50:10.980052 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:10.984978 master-0 kubenswrapper[37036]: I0312 14:50:10.980183 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:11.010920 master-0 kubenswrapper[37036]: I0312 14:50:11.010737 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzs9\" (UniqueName: \"kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9\") pod \"dnsmasq-dns-86748b6cff-htlt4\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:11.098721 master-0 kubenswrapper[37036]: I0312 14:50:11.096363 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:11.615484 master-0 kubenswrapper[37036]: W0312 14:50:11.615427 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284b7914_0d1c_48f7_8d61_c7e0f8f30643.slice/crio-daf50909abed412988f93299ad66da2a815094a54a1d3f8613930a04f026c754 WatchSource:0}: Error finding container daf50909abed412988f93299ad66da2a815094a54a1d3f8613930a04f026c754: Status 404 returned error can't find the container with id daf50909abed412988f93299ad66da2a815094a54a1d3f8613930a04f026c754 Mar 12 14:50:11.623304 master-0 kubenswrapper[37036]: I0312 14:50:11.622702 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"] Mar 12 14:50:11.926185 master-0 kubenswrapper[37036]: I0312 14:50:11.926135 37036 generic.go:334] "Generic (PLEG): container finished" podID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerID="5318083dd04a34176f30d0fcbbe366237fe435455f783dce408230a66aff40b1" exitCode=0 Mar 12 14:50:11.926508 master-0 kubenswrapper[37036]: I0312 14:50:11.926435 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" event={"ID":"284b7914-0d1c-48f7-8d61-c7e0f8f30643","Type":"ContainerDied","Data":"5318083dd04a34176f30d0fcbbe366237fe435455f783dce408230a66aff40b1"} Mar 12 14:50:11.926564 master-0 kubenswrapper[37036]: I0312 14:50:11.926514 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" event={"ID":"284b7914-0d1c-48f7-8d61-c7e0f8f30643","Type":"ContainerStarted","Data":"daf50909abed412988f93299ad66da2a815094a54a1d3f8613930a04f026c754"} Mar 12 14:50:11.927639 master-0 kubenswrapper[37036]: I0312 14:50:11.927578 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="dnsmasq-dns" 
containerID="cri-o://bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37" gracePeriod=10 Mar 12 14:50:12.504434 master-0 kubenswrapper[37036]: I0312 14:50:12.504390 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.623656 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.624203 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxgp\" (UniqueName: \"kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.624335 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.624423 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.624678 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.624735 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb\") pod \"3f950b6e-f5ef-4938-99f9-e37c8300503e\" (UID: \"3f950b6e-f5ef-4938-99f9-e37c8300503e\") " Mar 12 14:50:12.627576 master-0 kubenswrapper[37036]: I0312 14:50:12.627234 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp" (OuterVolumeSpecName: "kube-api-access-xnxgp") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "kube-api-access-xnxgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:12.631697 master-0 kubenswrapper[37036]: I0312 14:50:12.630319 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnxgp\" (UniqueName: \"kubernetes.io/projected/3f950b6e-f5ef-4938-99f9-e37c8300503e-kube-api-access-xnxgp\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.691113 master-0 kubenswrapper[37036]: I0312 14:50:12.691044 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:12.691452 master-0 kubenswrapper[37036]: I0312 14:50:12.691405 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:12.700669 master-0 kubenswrapper[37036]: I0312 14:50:12.700620 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:12.701513 master-0 kubenswrapper[37036]: I0312 14:50:12.701480 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config" (OuterVolumeSpecName: "config") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:12.706046 master-0 kubenswrapper[37036]: I0312 14:50:12.705997 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f950b6e-f5ef-4938-99f9-e37c8300503e" (UID: "3f950b6e-f5ef-4938-99f9-e37c8300503e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:12.731965 master-0 kubenswrapper[37036]: I0312 14:50:12.731866 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.731965 master-0 kubenswrapper[37036]: I0312 14:50:12.731938 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.731965 master-0 kubenswrapper[37036]: I0312 14:50:12.731948 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.731965 master-0 kubenswrapper[37036]: I0312 14:50:12.731960 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.731965 master-0 kubenswrapper[37036]: I0312 14:50:12.731973 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f950b6e-f5ef-4938-99f9-e37c8300503e-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:12.938686 master-0 kubenswrapper[37036]: I0312 14:50:12.938539 37036 generic.go:334] "Generic (PLEG): container finished" podID="4272c013-816c-4779-a81d-2945610612f3" containerID="b5e71b97f3cb342c1df35c00a0a1fd789fb1da64152dd434e9321f899e419b74" exitCode=0 Mar 12 14:50:12.938686 master-0 kubenswrapper[37036]: I0312 14:50:12.938613 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wmxbp" 
event={"ID":"4272c013-816c-4779-a81d-2945610612f3","Type":"ContainerDied","Data":"b5e71b97f3cb342c1df35c00a0a1fd789fb1da64152dd434e9321f899e419b74"}
Mar 12 14:50:12.959456 master-0 kubenswrapper[37036]: I0312 14:50:12.959391 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" event={"ID":"284b7914-0d1c-48f7-8d61-c7e0f8f30643","Type":"ContainerStarted","Data":"c4a08851f0b3233aab59755d11f549f9aa14cecaa062d58a68caf5e529357ffd"}
Mar 12 14:50:12.961371 master-0 kubenswrapper[37036]: I0312 14:50:12.960834 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86748b6cff-htlt4"
Mar 12 14:50:12.968259 master-0 kubenswrapper[37036]: I0312 14:50:12.966330 37036 generic.go:334] "Generic (PLEG): container finished" podID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerID="bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37" exitCode=0
Mar 12 14:50:12.968259 master-0 kubenswrapper[37036]: I0312 14:50:12.966434 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" event={"ID":"3f950b6e-f5ef-4938-99f9-e37c8300503e","Type":"ContainerDied","Data":"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"}
Mar 12 14:50:12.968259 master-0 kubenswrapper[37036]: I0312 14:50:12.966469 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts" event={"ID":"3f950b6e-f5ef-4938-99f9-e37c8300503e","Type":"ContainerDied","Data":"f9e9d1d2f371344eb8fd090dcdb1fafe16db88d56860c55c49354d74d41f1137"}
Mar 12 14:50:12.968259 master-0 kubenswrapper[37036]: I0312 14:50:12.966492 37036 scope.go:117] "RemoveContainer" containerID="bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"
Mar 12 14:50:12.968259 master-0 kubenswrapper[37036]: I0312 14:50:12.966529 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-857cb5775f-5b6ts"
Mar 12 14:50:13.004647 master-0 kubenswrapper[37036]: I0312 14:50:13.002599 37036 scope.go:117] "RemoveContainer" containerID="c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1"
Mar 12 14:50:13.031008 master-0 kubenswrapper[37036]: I0312 14:50:13.028777 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" podStartSLOduration=3.028752283 podStartE2EDuration="3.028752283s" podCreationTimestamp="2026-03-12 14:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:13.019027581 +0000 UTC m=+872.026768518" watchObservedRunningTime="2026-03-12 14:50:13.028752283 +0000 UTC m=+872.036493210"
Mar 12 14:50:13.051401 master-0 kubenswrapper[37036]: I0312 14:50:13.051355 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"]
Mar 12 14:50:13.061742 master-0 kubenswrapper[37036]: I0312 14:50:13.061672 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-857cb5775f-5b6ts"]
Mar 12 14:50:13.078979 master-0 kubenswrapper[37036]: I0312 14:50:13.078363 37036 scope.go:117] "RemoveContainer" containerID="bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"
Mar 12 14:50:13.078979 master-0 kubenswrapper[37036]: E0312 14:50:13.078882 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37\": container with ID starting with bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37 not found: ID does not exist" containerID="bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"
Mar 12 14:50:13.078979 master-0 kubenswrapper[37036]: I0312 14:50:13.078933 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37"} err="failed to get container status \"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37\": rpc error: code = NotFound desc = could not find container \"bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37\": container with ID starting with bba00f07251299f9e676dbca2144caa6f941550cda874ab8688617e6d5959a37 not found: ID does not exist"
Mar 12 14:50:13.078979 master-0 kubenswrapper[37036]: I0312 14:50:13.078961 37036 scope.go:117] "RemoveContainer" containerID="c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1"
Mar 12 14:50:13.079495 master-0 kubenswrapper[37036]: E0312 14:50:13.079424 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1\": container with ID starting with c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1 not found: ID does not exist" containerID="c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1"
Mar 12 14:50:13.079600 master-0 kubenswrapper[37036]: I0312 14:50:13.079498 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1"} err="failed to get container status \"c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1\": rpc error: code = NotFound desc = could not find container \"c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1\": container with ID starting with c972c203d3bf53d695027e969c8b1b886294322736c363a1240bd465b588a3c1 not found: ID does not exist"
Mar 12 14:50:13.252468 master-0 kubenswrapper[37036]: I0312 14:50:13.252398 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" path="/var/lib/kubelet/pods/3f950b6e-f5ef-4938-99f9-e37c8300503e/volumes"
Mar 12 14:50:14.420299 master-0 kubenswrapper[37036]: I0312 14:50:14.419946 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wmxbp"
Mar 12 14:50:14.462752 master-0 kubenswrapper[37036]: I0312 14:50:14.462683 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data\") pod \"4272c013-816c-4779-a81d-2945610612f3\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") "
Mar 12 14:50:14.463082 master-0 kubenswrapper[37036]: I0312 14:50:14.462931 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle\") pod \"4272c013-816c-4779-a81d-2945610612f3\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") "
Mar 12 14:50:14.463082 master-0 kubenswrapper[37036]: I0312 14:50:14.463008 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlrnn\" (UniqueName: \"kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn\") pod \"4272c013-816c-4779-a81d-2945610612f3\" (UID: \"4272c013-816c-4779-a81d-2945610612f3\") "
Mar 12 14:50:14.467959 master-0 kubenswrapper[37036]: I0312 14:50:14.467855 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn" (OuterVolumeSpecName: "kube-api-access-mlrnn") pod "4272c013-816c-4779-a81d-2945610612f3" (UID: "4272c013-816c-4779-a81d-2945610612f3"). InnerVolumeSpecName "kube-api-access-mlrnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:50:14.489854 master-0 kubenswrapper[37036]: I0312 14:50:14.489801 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4272c013-816c-4779-a81d-2945610612f3" (UID: "4272c013-816c-4779-a81d-2945610612f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:14.514366 master-0 kubenswrapper[37036]: I0312 14:50:14.514310 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data" (OuterVolumeSpecName: "config-data") pod "4272c013-816c-4779-a81d-2945610612f3" (UID: "4272c013-816c-4779-a81d-2945610612f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:14.566764 master-0 kubenswrapper[37036]: I0312 14:50:14.566686 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:14.566764 master-0 kubenswrapper[37036]: I0312 14:50:14.566739 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4272c013-816c-4779-a81d-2945610612f3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:14.566764 master-0 kubenswrapper[37036]: I0312 14:50:14.566750 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlrnn\" (UniqueName: \"kubernetes.io/projected/4272c013-816c-4779-a81d-2945610612f3-kube-api-access-mlrnn\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:14.991440 master-0 kubenswrapper[37036]: I0312 14:50:14.991322 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wmxbp" event={"ID":"4272c013-816c-4779-a81d-2945610612f3","Type":"ContainerDied","Data":"471220c1173241cc7681d8ae6714416c9681b82238c9087d1cb2dc9c7c50785f"}
Mar 12 14:50:14.991440 master-0 kubenswrapper[37036]: I0312 14:50:14.991373 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="471220c1173241cc7681d8ae6714416c9681b82238c9087d1cb2dc9c7c50785f"
Mar 12 14:50:14.991440 master-0 kubenswrapper[37036]: I0312 14:50:14.991370 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wmxbp"
Mar 12 14:50:15.284358 master-0 kubenswrapper[37036]: I0312 14:50:15.284239 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"]
Mar 12 14:50:15.358687 master-0 kubenswrapper[37036]: I0312 14:50:15.358615 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jf2m8"]
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: E0312 14:50:15.359129 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4272c013-816c-4779-a81d-2945610612f3" containerName="keystone-db-sync"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.359150 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4272c013-816c-4779-a81d-2945610612f3" containerName="keystone-db-sync"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: E0312 14:50:15.359202 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="init"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.359208 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="init"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: E0312 14:50:15.359248 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="dnsmasq-dns"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.359254 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="dnsmasq-dns"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.359425 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4272c013-816c-4779-a81d-2945610612f3" containerName="keystone-db-sync"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.359492 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f950b6e-f5ef-4938-99f9-e37c8300503e" containerName="dnsmasq-dns"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.360163 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.364516 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.364687 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.364792 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 12 14:50:15.369880 master-0 kubenswrapper[37036]: I0312 14:50:15.364969 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 12 14:50:15.390930 master-0 kubenswrapper[37036]: I0312 14:50:15.390473 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wjp\" (UniqueName: \"kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.390930 master-0 kubenswrapper[37036]: I0312 14:50:15.390641 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.390930 master-0 kubenswrapper[37036]: I0312 14:50:15.390676 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.390930 master-0 kubenswrapper[37036]: I0312 14:50:15.390705 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.390930 master-0 kubenswrapper[37036]: I0312 14:50:15.390927 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.391400 master-0 kubenswrapper[37036]: I0312 14:50:15.390997 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.395934 master-0 kubenswrapper[37036]: I0312 14:50:15.395070 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"]
Mar 12 14:50:15.403710 master-0 kubenswrapper[37036]: I0312 14:50:15.403643 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.414408 master-0 kubenswrapper[37036]: I0312 14:50:15.414053 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jf2m8"]
Mar 12 14:50:15.445601 master-0 kubenswrapper[37036]: I0312 14:50:15.445516 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"]
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497189 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497248 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497272 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497295 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497320 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497454 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxvd\" (UniqueName: \"kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497507 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497545 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497575 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497648 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8wjp\" (UniqueName: \"kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497708 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.510242 master-0 kubenswrapper[37036]: I0312 14:50:15.497764 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.518314 master-0 kubenswrapper[37036]: I0312 14:50:15.512058 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.526699 master-0 kubenswrapper[37036]: I0312 14:50:15.526634 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.533948 master-0 kubenswrapper[37036]: I0312 14:50:15.533613 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.538314 master-0 kubenswrapper[37036]: I0312 14:50:15.537475 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.555924 master-0 kubenswrapper[37036]: I0312 14:50:15.554720 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8wjp\" (UniqueName: \"kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.561947 master-0 kubenswrapper[37036]: I0312 14:50:15.558290 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle\") pod \"keystone-bootstrap-jf2m8\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.603238 master-0 kubenswrapper[37036]: I0312 14:50:15.600191 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.603238 master-0 kubenswrapper[37036]: I0312 14:50:15.600303 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.603238 master-0 kubenswrapper[37036]: I0312 14:50:15.600336 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.603238 master-0 kubenswrapper[37036]: I0312 14:50:15.600385 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxvd\" (UniqueName: \"kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.603238 master-0 kubenswrapper[37036]: I0312 14:50:15.600421 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.610403 master-0 kubenswrapper[37036]: I0312 14:50:15.605255 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.610403 master-0 kubenswrapper[37036]: I0312 14:50:15.605348 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.610403 master-0 kubenswrapper[37036]: I0312 14:50:15.606143 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.624956 master-0 kubenswrapper[37036]: I0312 14:50:15.622826 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.624956 master-0 kubenswrapper[37036]: I0312 14:50:15.622974 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.626776 master-0 kubenswrapper[37036]: I0312 14:50:15.625289 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.668604 master-0 kubenswrapper[37036]: I0312 14:50:15.668553 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxvd\" (UniqueName: \"kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd\") pod \"dnsmasq-dns-579d698c49-9q778\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.672990 master-0 kubenswrapper[37036]: I0312 14:50:15.672951 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-5ghhs"]
Mar 12 14:50:15.674384 master-0 kubenswrapper[37036]: I0312 14:50:15.674332 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.722801 master-0 kubenswrapper[37036]: I0312 14:50:15.722744 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jf2m8"
Mar 12 14:50:15.726249 master-0 kubenswrapper[37036]: I0312 14:50:15.725601 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-5ghhs"]
Mar 12 14:50:15.736404 master-0 kubenswrapper[37036]: I0312 14:50:15.735603 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-a807-account-create-update-qt766"]
Mar 12 14:50:15.737510 master-0 kubenswrapper[37036]: I0312 14:50:15.737190 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:15.739440 master-0 kubenswrapper[37036]: I0312 14:50:15.738990 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.739440 master-0 kubenswrapper[37036]: I0312 14:50:15.739047 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jft\" (UniqueName: \"kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.743389 master-0 kubenswrapper[37036]: I0312 14:50:15.743204 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Mar 12 14:50:15.744084 master-0 kubenswrapper[37036]: I0312 14:50:15.744034 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-a807-account-create-update-qt766"]
Mar 12 14:50:15.844029 master-0 kubenswrapper[37036]: I0312 14:50:15.829441 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579d698c49-9q778"
Mar 12 14:50:15.845063 master-0 kubenswrapper[37036]: I0312 14:50:15.845001 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:15.845142 master-0 kubenswrapper[37036]: I0312 14:50:15.845106 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.845205 master-0 kubenswrapper[37036]: I0312 14:50:15.845180 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jft\" (UniqueName: \"kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.845334 master-0 kubenswrapper[37036]: I0312 14:50:15.845308 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdfd\" (UniqueName: \"kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:15.897997 master-0 kubenswrapper[37036]: I0312 14:50:15.897965 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs"
Mar 12 14:50:15.947764 master-0 kubenswrapper[37036]: I0312 14:50:15.947684 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdfd\" (UniqueName: \"kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:15.947871 master-0 kubenswrapper[37036]: I0312 14:50:15.947815 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:15.948882 master-0 kubenswrapper[37036]: I0312 14:50:15.948800 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766"
Mar 12 14:50:16.006913 master-0 kubenswrapper[37036]: I0312 14:50:15.993398 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-db-sync-6bdmp"]
Mar 12 14:50:16.006913 master-0 kubenswrapper[37036]: I0312 14:50:15.995287 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.006913 master-0 kubenswrapper[37036]: I0312 14:50:16.005666 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-config-data"
Mar 12 14:50:16.006913 master-0 kubenswrapper[37036]: I0312 14:50:16.005855 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-scripts"
Mar 12 14:50:16.022038 master-0 kubenswrapper[37036]: I0312 14:50:16.019627 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="dnsmasq-dns" containerID="cri-o://c4a08851f0b3233aab59755d11f549f9aa14cecaa062d58a68caf5e529357ffd" gracePeriod=10
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071297 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071381 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071538 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlggb\" (UniqueName: \"kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071849 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071919 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.074987 master-0 kubenswrapper[37036]: I0312 14:50:16.071984 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.195572 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.174103 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.195845 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlggb\" (UniqueName: \"kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.196027 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.196071 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.202124 master-0 kubenswrapper[37036]: I0312 14:50:16.196135 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:16.211113 master-0 kubenswrapper[37036]: I0312 14:50:16.207181 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id\") pod 
\"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.211113 master-0 kubenswrapper[37036]: I0312 14:50:16.207360 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.223889 master-0 kubenswrapper[37036]: I0312 14:50:16.222788 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jft\" (UniqueName: \"kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft\") pod \"ironic-db-create-5ghhs\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " pod="openstack/ironic-db-create-5ghhs" Mar 12 14:50:16.223889 master-0 kubenswrapper[37036]: I0312 14:50:16.223468 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.229719 master-0 kubenswrapper[37036]: I0312 14:50:16.229670 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdfd\" (UniqueName: \"kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd\") pod \"ironic-a807-account-create-update-qt766\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " pod="openstack/ironic-a807-account-create-update-qt766" Mar 12 14:50:16.229987 master-0 kubenswrapper[37036]: I0312 14:50:16.229950 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.230330 master-0 kubenswrapper[37036]: I0312 14:50:16.230289 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.281762 master-0 kubenswrapper[37036]: I0312 14:50:16.271316 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlggb\" (UniqueName: \"kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb\") pod \"cinder-05598-db-sync-6bdmp\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") " pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.328329 master-0 kubenswrapper[37036]: I0312 14:50:16.318240 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-db-sync-6bdmp"] Mar 12 14:50:16.331353 master-0 kubenswrapper[37036]: I0312 14:50:16.329715 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-db-sync-6bdmp" Mar 12 14:50:16.348634 master-0 kubenswrapper[37036]: I0312 14:50:16.348582 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-5ghhs" Mar 12 14:50:16.382640 master-0 kubenswrapper[37036]: I0312 14:50:16.367059 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-a807-account-create-update-qt766" Mar 12 14:50:16.440918 master-0 kubenswrapper[37036]: I0312 14:50:16.437966 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rdgvk"] Mar 12 14:50:16.440918 master-0 kubenswrapper[37036]: I0312 14:50:16.439462 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.458265 master-0 kubenswrapper[37036]: I0312 14:50:16.458227 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 14:50:16.458708 master-0 kubenswrapper[37036]: I0312 14:50:16.458551 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 14:50:16.492023 master-0 kubenswrapper[37036]: I0312 14:50:16.490104 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-79jsx"] Mar 12 14:50:16.492212 master-0 kubenswrapper[37036]: I0312 14:50:16.492084 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.506838 master-0 kubenswrapper[37036]: I0312 14:50:16.506771 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 14:50:16.507273 master-0 kubenswrapper[37036]: I0312 14:50:16.507083 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.513943 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"] Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.515309 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.515447 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.515474 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6d7\" (UniqueName: \"kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.515533 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.517979 master-0 kubenswrapper[37036]: I0312 14:50:16.515566 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.572378 master-0 kubenswrapper[37036]: I0312 14:50:16.561548 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-79jsx"] Mar 12 14:50:16.572378 master-0 kubenswrapper[37036]: I0312 14:50:16.569949 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rdgvk"] Mar 12 14:50:16.594046 master-0 kubenswrapper[37036]: I0312 14:50:16.593994 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jf2m8"] Mar 12 14:50:16.619558 master-0 kubenswrapper[37036]: I0312 14:50:16.619505 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.619746 master-0 kubenswrapper[37036]: I0312 14:50:16.619610 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.620004 master-0 kubenswrapper[37036]: I0312 
14:50:16.619835 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.620004 master-0 kubenswrapper[37036]: I0312 14:50:16.619970 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.621920 master-0 kubenswrapper[37036]: I0312 14:50:16.621758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6d7\" (UniqueName: \"kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.621920 master-0 kubenswrapper[37036]: I0312 14:50:16.621838 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sjxl\" (UniqueName: \"kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.622041 master-0 kubenswrapper[37036]: I0312 14:50:16.621936 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.622103 master-0 kubenswrapper[37036]: 
I0312 14:50:16.622079 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.622710 master-0 kubenswrapper[37036]: I0312 14:50:16.622671 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"] Mar 12 14:50:16.626421 master-0 kubenswrapper[37036]: I0312 14:50:16.625627 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.634221 master-0 kubenswrapper[37036]: I0312 14:50:16.634130 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.636633 master-0 kubenswrapper[37036]: I0312 14:50:16.636584 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.642819 master-0 kubenswrapper[37036]: I0312 14:50:16.642762 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"] Mar 12 14:50:16.643654 master-0 kubenswrapper[37036]: I0312 14:50:16.643626 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 
14:50:16.660647 master-0 kubenswrapper[37036]: I0312 14:50:16.660371 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.690590 master-0 kubenswrapper[37036]: I0312 14:50:16.690512 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"] Mar 12 14:50:16.691302 master-0 kubenswrapper[37036]: I0312 14:50:16.691280 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6d7\" (UniqueName: \"kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7\") pod \"placement-db-sync-rdgvk\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.726885 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727055 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sjxl\" (UniqueName: \"kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727105 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727164 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727221 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727266 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727352 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727377 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.730819 master-0 kubenswrapper[37036]: I0312 14:50:16.727439 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.734375 master-0 kubenswrapper[37036]: I0312 14:50:16.731711 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.734375 master-0 kubenswrapper[37036]: I0312 14:50:16.732465 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.788230 master-0 kubenswrapper[37036]: I0312 14:50:16.783493 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sjxl\" (UniqueName: \"kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl\") pod \"neutron-db-sync-79jsx\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.788230 master-0 kubenswrapper[37036]: I0312 14:50:16.786701 37036 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:16.830247 master-0 kubenswrapper[37036]: I0312 14:50:16.830188 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:16.832059 master-0 kubenswrapper[37036]: I0312 14:50:16.831703 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.832059 master-0 kubenswrapper[37036]: I0312 14:50:16.831792 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.832059 master-0 kubenswrapper[37036]: I0312 14:50:16.831882 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.835122 master-0 kubenswrapper[37036]: I0312 14:50:16.833523 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.843628 master-0 kubenswrapper[37036]: I0312 14:50:16.843529 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.854003 master-0 kubenswrapper[37036]: I0312 14:50:16.851095 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.854003 master-0 kubenswrapper[37036]: I0312 14:50:16.853935 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.854205 master-0 kubenswrapper[37036]: I0312 14:50:16.854117 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.854996 master-0 kubenswrapper[37036]: I0312 14:50:16.854320 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.857104 master-0 kubenswrapper[37036]: I0312 14:50:16.855236 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.887649 master-0 kubenswrapper[37036]: I0312 14:50:16.860691 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:16.937290 master-0 kubenswrapper[37036]: I0312 14:50:16.931456 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp\") pod \"dnsmasq-dns-7c468f6c5c-sfk59\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") " pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:17.045425 master-0 kubenswrapper[37036]: I0312 14:50:17.044407 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579d698c49-9q778" event={"ID":"40b9131a-7cef-4638-8a57-e2109afa2584","Type":"ContainerStarted","Data":"7b4eb8d1634baf2c21a53d70d4589893f90a1eebd332fe7e85e726ffaf0e2a4c"} Mar 12 14:50:17.049354 master-0 kubenswrapper[37036]: I0312 14:50:17.046262 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jf2m8" event={"ID":"b5f99014-23e5-4733-a3a7-ed02f994e177","Type":"ContainerStarted","Data":"af46b1b22523d15b3dd9a6a98d793f5968a2e64866ce1781c5c35a4b616397e0"} Mar 12 14:50:17.067008 master-0 kubenswrapper[37036]: I0312 14:50:17.064572 37036 generic.go:334] "Generic (PLEG): container finished" podID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerID="c4a08851f0b3233aab59755d11f549f9aa14cecaa062d58a68caf5e529357ffd" exitCode=0 Mar 12 
14:50:17.067008 master-0 kubenswrapper[37036]: I0312 14:50:17.064642 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" event={"ID":"284b7914-0d1c-48f7-8d61-c7e0f8f30643","Type":"ContainerDied","Data":"c4a08851f0b3233aab59755d11f549f9aa14cecaa062d58a68caf5e529357ffd"} Mar 12 14:50:17.212249 master-0 kubenswrapper[37036]: I0312 14:50:17.211979 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:17.333475 master-0 kubenswrapper[37036]: I0312 14:50:17.332387 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-db-sync-6bdmp"] Mar 12 14:50:17.333475 master-0 kubenswrapper[37036]: I0312 14:50:17.332413 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:17.494037 master-0 kubenswrapper[37036]: I0312 14:50:17.492548 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.494037 master-0 kubenswrapper[37036]: I0312 14:50:17.493820 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.494037 master-0 kubenswrapper[37036]: I0312 14:50:17.493857 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vzs9\" (UniqueName: \"kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: 
\"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.494037 master-0 kubenswrapper[37036]: I0312 14:50:17.493999 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.494037 master-0 kubenswrapper[37036]: I0312 14:50:17.494050 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.494749 master-0 kubenswrapper[37036]: I0312 14:50:17.494233 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc\") pod \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\" (UID: \"284b7914-0d1c-48f7-8d61-c7e0f8f30643\") " Mar 12 14:50:17.497942 master-0 kubenswrapper[37036]: I0312 14:50:17.497853 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9" (OuterVolumeSpecName: "kube-api-access-2vzs9") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "kube-api-access-2vzs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: I0312 14:50:17.577394 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: E0312 14:50:17.580334 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="init" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: I0312 14:50:17.580371 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="init" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: E0312 14:50:17.585221 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="dnsmasq-dns" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: I0312 14:50:17.585241 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="dnsmasq-dns" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: I0312 14:50:17.586163 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" containerName="dnsmasq-dns" Mar 12 14:50:17.596354 master-0 kubenswrapper[37036]: I0312 14:50:17.589800 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.615468 master-0 kubenswrapper[37036]: I0312 14:50:17.610480 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-external-config-data" Mar 12 14:50:17.615468 master-0 kubenswrapper[37036]: I0312 14:50:17.610857 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 14:50:17.615468 master-0 kubenswrapper[37036]: I0312 14:50:17.611084 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 14:50:17.615468 master-0 kubenswrapper[37036]: I0312 14:50:17.612794 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:17.645022 master-0 kubenswrapper[37036]: I0312 14:50:17.644952 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:17.678578 master-0 kubenswrapper[37036]: I0312 14:50:17.678527 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vzs9\" (UniqueName: \"kubernetes.io/projected/284b7914-0d1c-48f7-8d61-c7e0f8f30643-kube-api-access-2vzs9\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.699121 master-0 kubenswrapper[37036]: I0312 14:50:17.691893 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config" (OuterVolumeSpecName: "config") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:17.699121 master-0 kubenswrapper[37036]: I0312 14:50:17.696804 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:17.739735 master-0 kubenswrapper[37036]: I0312 14:50:17.739676 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:17.773829 master-0 kubenswrapper[37036]: I0312 14:50:17.771090 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-5ghhs"] Mar 12 14:50:17.773829 master-0 kubenswrapper[37036]: I0312 14:50:17.772947 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "284b7914-0d1c-48f7-8d61-c7e0f8f30643" (UID: "284b7914-0d1c-48f7-8d61-c7e0f8f30643"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:17.780722 master-0 kubenswrapper[37036]: I0312 14:50:17.780665 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.780931 master-0 kubenswrapper[37036]: I0312 14:50:17.780740 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.780931 master-0 kubenswrapper[37036]: I0312 14:50:17.780802 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.780931 master-0 kubenswrapper[37036]: I0312 14:50:17.780830 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.780931 master-0 kubenswrapper[37036]: I0312 14:50:17.780925 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.781136 master-0 kubenswrapper[37036]: I0312 14:50:17.780961 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.781136 master-0 kubenswrapper[37036]: I0312 14:50:17.781056 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhcm\" (UniqueName: \"kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.781136 master-0 kubenswrapper[37036]: I0312 14:50:17.781094 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.781233 master-0 kubenswrapper[37036]: I0312 14:50:17.781181 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.781233 master-0 kubenswrapper[37036]: I0312 14:50:17.781199 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.781233 master-0 kubenswrapper[37036]: I0312 14:50:17.781210 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.781233 master-0 kubenswrapper[37036]: I0312 14:50:17.781221 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.781233 master-0 kubenswrapper[37036]: I0312 14:50:17.781233 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284b7914-0d1c-48f7-8d61-c7e0f8f30643-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:17.860645 master-0 kubenswrapper[37036]: I0312 14:50:17.845643 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-a807-account-create-update-qt766"] Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.927913 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928000 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 
master-0 kubenswrapper[37036]: I0312 14:50:17.928063 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928095 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928179 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928220 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928324 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhcm\" (UniqueName: \"kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm\") pod \"glance-bc20e-default-external-api-0\" (UID: 
\"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928356 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.928961 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.946057 master-0 kubenswrapper[37036]: I0312 14:50:17.944000 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.978023 master-0 kubenswrapper[37036]: I0312 14:50:17.960272 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.978023 master-0 kubenswrapper[37036]: I0312 14:50:17.965122 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: 
\"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.978023 master-0 kubenswrapper[37036]: I0312 14:50:17.965690 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:50:17.978023 master-0 kubenswrapper[37036]: I0312 14:50:17.965714 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6c4df80a2d0bf34399f5f7093642ff3bb6c859672516bc87c6e10e693c5b3679/globalmount\"" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.988949 master-0 kubenswrapper[37036]: I0312 14:50:17.982239 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:17.988949 master-0 kubenswrapper[37036]: I0312 14:50:17.986812 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:18.026937 master-0 kubenswrapper[37036]: I0312 14:50:18.019529 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhcm\" (UniqueName: \"kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm\") pod 
\"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:18.123870 master-0 kubenswrapper[37036]: I0312 14:50:18.123823 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:18.128506 master-0 kubenswrapper[37036]: I0312 14:50:18.126836 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.128506 master-0 kubenswrapper[37036]: I0312 14:50:18.128074 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" event={"ID":"284b7914-0d1c-48f7-8d61-c7e0f8f30643","Type":"ContainerDied","Data":"daf50909abed412988f93299ad66da2a815094a54a1d3f8613930a04f026c754"} Mar 12 14:50:18.128506 master-0 kubenswrapper[37036]: I0312 14:50:18.128124 37036 scope.go:117] "RemoveContainer" containerID="c4a08851f0b3233aab59755d11f549f9aa14cecaa062d58a68caf5e529357ffd" Mar 12 14:50:18.128506 master-0 kubenswrapper[37036]: I0312 14:50:18.128174 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86748b6cff-htlt4" Mar 12 14:50:18.130877 master-0 kubenswrapper[37036]: I0312 14:50:18.130723 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-internal-config-data" Mar 12 14:50:18.131991 master-0 kubenswrapper[37036]: I0312 14:50:18.131936 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 14:50:18.132329 master-0 kubenswrapper[37036]: I0312 14:50:18.132236 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-a807-account-create-update-qt766" event={"ID":"65ac62ad-c04b-410b-bdc7-44e1663f6682","Type":"ContainerStarted","Data":"e91312e57504efd2267c7df9b206045079c30b75307ca6334311a0ce41be9490"} Mar 12 14:50:18.134249 master-0 kubenswrapper[37036]: I0312 14:50:18.134214 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rdgvk"] Mar 12 14:50:18.143167 master-0 kubenswrapper[37036]: I0312 14:50:18.143110 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-db-sync-6bdmp" event={"ID":"75876022-f077-4c9e-95c1-3d0b1dbb61a3","Type":"ContainerStarted","Data":"dc0a4e74bb4a1da0a812e65e26889bbef0cc39857936388772d4142bffe98524"} Mar 12 14:50:18.145649 master-0 kubenswrapper[37036]: I0312 14:50:18.145600 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-79jsx"] Mar 12 14:50:18.148080 master-0 kubenswrapper[37036]: I0312 14:50:18.148005 37036 generic.go:334] "Generic (PLEG): container finished" podID="40b9131a-7cef-4638-8a57-e2109afa2584" containerID="45ffa95991969f884fc509cd559c6c851de4e9af4637d68102f261d195173008" exitCode=0 Mar 12 14:50:18.148198 master-0 kubenswrapper[37036]: I0312 14:50:18.148134 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579d698c49-9q778" 
event={"ID":"40b9131a-7cef-4638-8a57-e2109afa2584","Type":"ContainerDied","Data":"45ffa95991969f884fc509cd559c6c851de4e9af4637d68102f261d195173008"} Mar 12 14:50:18.158193 master-0 kubenswrapper[37036]: I0312 14:50:18.158138 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:18.164862 master-0 kubenswrapper[37036]: I0312 14:50:18.164814 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jf2m8" event={"ID":"b5f99014-23e5-4733-a3a7-ed02f994e177","Type":"ContainerStarted","Data":"c53e9b5eca44dbdb711fd9ca31714b12eeb4a562c2a32756e0839e8a22701626"} Mar 12 14:50:18.185935 master-0 kubenswrapper[37036]: I0312 14:50:18.185841 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-5ghhs" event={"ID":"93af238f-2462-4584-872e-6e7c2c98b599","Type":"ContainerStarted","Data":"ef6d6b13fbb8e3d426b95fa7eddb52ff190fbf30607964a639c71d8c5e1d4a3e"} Mar 12 14:50:18.265161 master-0 kubenswrapper[37036]: I0312 14:50:18.264353 37036 scope.go:117] "RemoveContainer" containerID="5318083dd04a34176f30d0fcbbe366237fe435455f783dce408230a66aff40b1" Mar 12 14:50:18.270131 master-0 kubenswrapper[37036]: I0312 14:50:18.269322 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jf2m8" podStartSLOduration=3.269298589 podStartE2EDuration="3.269298589s" podCreationTimestamp="2026-03-12 14:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:18.218522909 +0000 UTC m=+877.226263846" watchObservedRunningTime="2026-03-12 14:50:18.269298589 +0000 UTC m=+877.277039526" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.281709 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwtr\" (UniqueName: 
\"kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.281774 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.281807 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.281828 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.281872 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 
14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.282945 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.283054 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.284116 master-0 kubenswrapper[37036]: I0312 14:50:18.283149 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.294144 master-0 kubenswrapper[37036]: I0312 14:50:18.293058 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"] Mar 12 14:50:18.306390 master-0 kubenswrapper[37036]: I0312 14:50:18.304132 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"] Mar 12 14:50:18.323926 master-0 kubenswrapper[37036]: I0312 14:50:18.319336 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86748b6cff-htlt4"] Mar 12 14:50:18.386145 master-0 kubenswrapper[37036]: I0312 14:50:18.385938 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.386145 master-0 kubenswrapper[37036]: I0312 14:50:18.386014 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.386145 master-0 kubenswrapper[37036]: I0312 14:50:18.386034 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.386965 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.387171 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.387227 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.387332 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.387541 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwtr\" (UniqueName: \"kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.387960 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.388921 master-0 kubenswrapper[37036]: I0312 14:50:18.388392 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.390270 master-0 kubenswrapper[37036]: I0312 14:50:18.390241 
37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.391309 master-0 kubenswrapper[37036]: I0312 14:50:18.391273 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.391405 master-0 kubenswrapper[37036]: I0312 14:50:18.391392 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:50:18.391489 master-0 kubenswrapper[37036]: I0312 14:50:18.391472 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/49ce0cc1a33b2e89ca28a5f90915fcaf3a1dd141d163d3ea96d25fddb3a57200/globalmount\"" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.411078 master-0 kubenswrapper[37036]: I0312 14:50:18.411015 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkwtr\" (UniqueName: \"kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.426075 master-0 kubenswrapper[37036]: 
I0312 14:50:18.422195 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:18.427599 master-0 kubenswrapper[37036]: I0312 14:50:18.427559 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:19.015844 master-0 kubenswrapper[37036]: I0312 14:50:19.015390 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579d698c49-9q778" Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114549 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114711 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114744 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: 
\"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114810 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114938 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxvd\" (UniqueName: \"kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.116006 master-0 kubenswrapper[37036]: I0312 14:50:19.114978 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb\") pod \"40b9131a-7cef-4638-8a57-e2109afa2584\" (UID: \"40b9131a-7cef-4638-8a57-e2109afa2584\") " Mar 12 14:50:19.137233 master-0 kubenswrapper[37036]: I0312 14:50:19.133414 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd" (OuterVolumeSpecName: "kube-api-access-gxxvd") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "kube-api-access-gxxvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:19.137233 master-0 kubenswrapper[37036]: I0312 14:50:19.136865 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:19.139087 master-0 kubenswrapper[37036]: I0312 14:50:19.138887 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:19.145636 master-0 kubenswrapper[37036]: I0312 14:50:19.145552 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:19.149948 master-0 kubenswrapper[37036]: I0312 14:50:19.149780 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:19.158065 master-0 kubenswrapper[37036]: I0312 14:50:19.158003 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config" (OuterVolumeSpecName: "config") pod "40b9131a-7cef-4638-8a57-e2109afa2584" (UID: "40b9131a-7cef-4638-8a57-e2109afa2584"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:19.181272 master-0 kubenswrapper[37036]: I0312 14:50:19.181198 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:19.182221 master-0 kubenswrapper[37036]: E0312 14:50:19.182177 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-bc20e-default-external-api-0" podUID="04bda76a-263f-41d9-a5e0-1a2638a6893f" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232135 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232190 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxvd\" (UniqueName: \"kubernetes.io/projected/40b9131a-7cef-4638-8a57-e2109afa2584-kube-api-access-gxxvd\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232207 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232248 37036 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232263 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.236192 master-0 kubenswrapper[37036]: I0312 14:50:19.232274 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b9131a-7cef-4638-8a57-e2109afa2584-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.239487 master-0 kubenswrapper[37036]: I0312 14:50:19.238876 37036 generic.go:334] "Generic (PLEG): container finished" podID="93af238f-2462-4584-872e-6e7c2c98b599" containerID="6e9b17de302278c8fec52a68ff69b20a8ad7c6affc7c59a16a46e2d2c905794c" exitCode=0 Mar 12 14:50:19.247319 master-0 kubenswrapper[37036]: I0312 14:50:19.246392 37036 generic.go:334] "Generic (PLEG): container finished" podID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerID="2eaf55a3e6d8bc65c052162f70a6ff505057828559184eddf1ae00f63dba9316" exitCode=0 Mar 12 14:50:19.261052 master-0 kubenswrapper[37036]: I0312 14:50:19.260328 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284b7914-0d1c-48f7-8d61-c7e0f8f30643" path="/var/lib/kubelet/pods/284b7914-0d1c-48f7-8d61-c7e0f8f30643/volumes" Mar 12 14:50:19.268485 master-0 kubenswrapper[37036]: I0312 14:50:19.261878 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:19.268485 master-0 kubenswrapper[37036]: I0312 14:50:19.261954 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-5ghhs" 
event={"ID":"93af238f-2462-4584-872e-6e7c2c98b599","Type":"ContainerDied","Data":"6e9b17de302278c8fec52a68ff69b20a8ad7c6affc7c59a16a46e2d2c905794c"} Mar 12 14:50:19.268485 master-0 kubenswrapper[37036]: I0312 14:50:19.262008 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdgvk" event={"ID":"36a55e95-783b-40ef-996a-5e29f87dc118","Type":"ContainerStarted","Data":"b22d7c12a7792b3c33f0cbf1150fe1b00dd3aaab44014f8e994eb1d1209b69e1"} Mar 12 14:50:19.268485 master-0 kubenswrapper[37036]: I0312 14:50:19.262022 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" event={"ID":"4957b7fc-e353-40a6-b5c5-39b608bb366d","Type":"ContainerDied","Data":"2eaf55a3e6d8bc65c052162f70a6ff505057828559184eddf1ae00f63dba9316"} Mar 12 14:50:19.268485 master-0 kubenswrapper[37036]: I0312 14:50:19.262033 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" event={"ID":"4957b7fc-e353-40a6-b5c5-39b608bb366d","Type":"ContainerStarted","Data":"4253f189b2496fef804d6e2264bfdfc278708c5336db677d0eee80545c7163e6"} Mar 12 14:50:19.272272 master-0 kubenswrapper[37036]: E0312 14:50:19.272191 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-bc20e-default-internal-api-0" podUID="7e4a0f08-e1d5-4490-a696-b402a042ee61" Mar 12 14:50:19.279631 master-0 kubenswrapper[37036]: I0312 14:50:19.279565 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-79jsx" event={"ID":"fa5c40b0-d90b-4a98-af67-d37503c2c2dc","Type":"ContainerStarted","Data":"5458ce02577ae859b4306e66233e53df494f053609bc16547a7276b6f02a2b9d"} Mar 12 14:50:19.279739 master-0 kubenswrapper[37036]: I0312 14:50:19.279645 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-79jsx" 
event={"ID":"fa5c40b0-d90b-4a98-af67-d37503c2c2dc","Type":"ContainerStarted","Data":"c9d8ac4498f064b9465dc24667bcc60d00b665c43733f0c97f85baba365ecc58"} Mar 12 14:50:19.287934 master-0 kubenswrapper[37036]: I0312 14:50:19.286471 37036 generic.go:334] "Generic (PLEG): container finished" podID="65ac62ad-c04b-410b-bdc7-44e1663f6682" containerID="bf163ca9685c5918917425a441d865099a4625fa44718848d4c308c0ee3590e2" exitCode=0 Mar 12 14:50:19.287934 master-0 kubenswrapper[37036]: I0312 14:50:19.286747 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-a807-account-create-update-qt766" event={"ID":"65ac62ad-c04b-410b-bdc7-44e1663f6682","Type":"ContainerDied","Data":"bf163ca9685c5918917425a441d865099a4625fa44718848d4c308c0ee3590e2"} Mar 12 14:50:19.290940 master-0 kubenswrapper[37036]: I0312 14:50:19.290888 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:19.291083 master-0 kubenswrapper[37036]: I0312 14:50:19.290979 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579d698c49-9q778" event={"ID":"40b9131a-7cef-4638-8a57-e2109afa2584","Type":"ContainerDied","Data":"7b4eb8d1634baf2c21a53d70d4589893f90a1eebd332fe7e85e726ffaf0e2a4c"} Mar 12 14:50:19.291083 master-0 kubenswrapper[37036]: I0312 14:50:19.291038 37036 scope.go:117] "RemoveContainer" containerID="45ffa95991969f884fc509cd559c6c851de4e9af4637d68102f261d195173008" Mar 12 14:50:19.291433 master-0 kubenswrapper[37036]: I0312 14:50:19.291345 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579d698c49-9q778" Mar 12 14:50:19.309703 master-0 kubenswrapper[37036]: I0312 14:50:19.309648 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:19.356188 master-0 kubenswrapper[37036]: I0312 14:50:19.356096 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-79jsx" podStartSLOduration=4.356074874 podStartE2EDuration="4.356074874s" podCreationTimestamp="2026-03-12 14:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:19.351047759 +0000 UTC m=+878.358788696" watchObservedRunningTime="2026-03-12 14:50:19.356074874 +0000 UTC m=+878.363815811" Mar 12 14:50:19.437356 master-0 kubenswrapper[37036]: I0312 14:50:19.437301 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.458772 master-0 kubenswrapper[37036]: I0312 14:50:19.458452 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.458772 master-0 kubenswrapper[37036]: I0312 14:50:19.458673 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.458772 master-0 kubenswrapper[37036]: I0312 14:50:19.458733 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbhcm\" (UniqueName: 
\"kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.458772 master-0 kubenswrapper[37036]: I0312 14:50:19.458772 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.459107 master-0 kubenswrapper[37036]: I0312 14:50:19.458845 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.459107 master-0 kubenswrapper[37036]: I0312 14:50:19.458873 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:19.459312 master-0 kubenswrapper[37036]: I0312 14:50:19.459146 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs" (OuterVolumeSpecName: "logs") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:19.464373 master-0 kubenswrapper[37036]: I0312 14:50:19.463792 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.508261 master-0 kubenswrapper[37036]: I0312 14:50:19.508148 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:19.519762 master-0 kubenswrapper[37036]: I0312 14:50:19.519719 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:19.520427 master-0 kubenswrapper[37036]: I0312 14:50:19.520333 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts" (OuterVolumeSpecName: "scripts") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:19.525732 master-0 kubenswrapper[37036]: I0312 14:50:19.525675 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm" (OuterVolumeSpecName: "kube-api-access-zbhcm") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "kube-api-access-zbhcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:19.535278 master-0 kubenswrapper[37036]: I0312 14:50:19.535151 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data" (OuterVolumeSpecName: "config-data") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:19.538109 master-0 kubenswrapper[37036]: I0312 14:50:19.538046 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:19.566521 master-0 kubenswrapper[37036]: I0312 14:50:19.566467 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbhcm\" (UniqueName: \"kubernetes.io/projected/04bda76a-263f-41d9-a5e0-1a2638a6893f-kube-api-access-zbhcm\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.566521 master-0 kubenswrapper[37036]: I0312 14:50:19.566508 37036 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.566521 master-0 kubenswrapper[37036]: I0312 14:50:19.566519 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.566521 master-0 kubenswrapper[37036]: I0312 14:50:19.566529 37036 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04bda76a-263f-41d9-a5e0-1a2638a6893f-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.566521 master-0 kubenswrapper[37036]: I0312 14:50:19.566537 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.566857 master-0 kubenswrapper[37036]: I0312 14:50:19.566547 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04bda76a-263f-41d9-a5e0-1a2638a6893f-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:19.586516 master-0 kubenswrapper[37036]: I0312 14:50:19.586415 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"] Mar 12 14:50:19.604112 master-0 kubenswrapper[37036]: I0312 
14:50:19.604063 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-579d698c49-9q778"] Mar 12 14:50:19.747091 master-0 kubenswrapper[37036]: I0312 14:50:19.745968 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:19.874650 master-0 kubenswrapper[37036]: I0312 14:50:19.874594 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"04bda76a-263f-41d9-a5e0-1a2638a6893f\" (UID: \"04bda76a-263f-41d9-a5e0-1a2638a6893f\") " Mar 12 14:50:20.311691 master-0 kubenswrapper[37036]: I0312 14:50:20.311619 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" event={"ID":"4957b7fc-e353-40a6-b5c5-39b608bb366d","Type":"ContainerStarted","Data":"3e29737a7de27a41ff097699281dddf8a3c0641df751cc31550f9a2113f449e5"} Mar 12 14:50:20.314048 master-0 kubenswrapper[37036]: I0312 14:50:20.312755 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:20.314594 master-0 kubenswrapper[37036]: I0312 14:50:20.314540 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:20.315684 master-0 kubenswrapper[37036]: I0312 14:50:20.314620 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:20.328568 master-0 kubenswrapper[37036]: I0312 14:50:20.328523 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:20.338643 master-0 kubenswrapper[37036]: I0312 14:50:20.338555 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" podStartSLOduration=4.33853737 podStartE2EDuration="4.33853737s" podCreationTimestamp="2026-03-12 14:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:20.33168443 +0000 UTC m=+879.339425367" watchObservedRunningTime="2026-03-12 14:50:20.33853737 +0000 UTC m=+879.346278307" Mar 12 14:50:20.402031 master-0 kubenswrapper[37036]: I0312 14:50:20.400472 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.402031 master-0 kubenswrapper[37036]: I0312 14:50:20.400572 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.402031 master-0 kubenswrapper[37036]: I0312 14:50:20.400651 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwtr\" (UniqueName: \"kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.402031 master-0 kubenswrapper[37036]: I0312 14:50:20.400670 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.402031 master-0 kubenswrapper[37036]: I0312 14:50:20.400699 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.409786 master-0 kubenswrapper[37036]: I0312 14:50:20.407846 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs" (OuterVolumeSpecName: "logs") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:20.412020 master-0 kubenswrapper[37036]: I0312 14:50:20.410832 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr" (OuterVolumeSpecName: "kube-api-access-rkwtr") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "kube-api-access-rkwtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:20.422069 master-0 kubenswrapper[37036]: I0312 14:50:20.413108 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:20.422069 master-0 kubenswrapper[37036]: I0312 14:50:20.413100 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts" (OuterVolumeSpecName: "scripts") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:20.422566 master-0 kubenswrapper[37036]: I0312 14:50:20.422503 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:20.502955 master-0 kubenswrapper[37036]: I0312 14:50:20.502226 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.502955 master-0 kubenswrapper[37036]: I0312 14:50:20.502276 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data\") pod \"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:20.502955 master-0 kubenswrapper[37036]: I0312 14:50:20.502738 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: 
"7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503452 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwtr\" (UniqueName: \"kubernetes.io/projected/7e4a0f08-e1d5-4490-a696-b402a042ee61-kube-api-access-rkwtr\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503473 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503482 37036 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503490 37036 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503498 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.503521 master-0 kubenswrapper[37036]: I0312 14:50:20.503507 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e4a0f08-e1d5-4490-a696-b402a042ee61-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:20.507868 master-0 kubenswrapper[37036]: I0312 14:50:20.507755 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data" (OuterVolumeSpecName: "config-data") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:20.606877 master-0 kubenswrapper[37036]: I0312 14:50:20.606804 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4a0f08-e1d5-4490-a696-b402a042ee61-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:21.270622 master-0 kubenswrapper[37036]: I0312 14:50:21.270550 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1" (OuterVolumeSpecName: "glance") pod "04bda76a-263f-41d9-a5e0-1a2638a6893f" (UID: "04bda76a-263f-41d9-a5e0-1a2638a6893f"). InnerVolumeSpecName "pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 14:50:21.272101 master-0 kubenswrapper[37036]: I0312 14:50:21.271868 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b9131a-7cef-4638-8a57-e2109afa2584" path="/var/lib/kubelet/pods/40b9131a-7cef-4638-8a57-e2109afa2584/volumes" Mar 12 14:50:21.288369 master-0 kubenswrapper[37036]: I0312 14:50:21.288302 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.322733 master-0 kubenswrapper[37036]: I0312 14:50:21.322682 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod 
\"7e4a0f08-e1d5-4490-a696-b402a042ee61\" (UID: \"7e4a0f08-e1d5-4490-a696-b402a042ee61\") " Mar 12 14:50:21.323639 master-0 kubenswrapper[37036]: I0312 14:50:21.323609 37036 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") on node \"master-0\" " Mar 12 14:50:21.336934 master-0 kubenswrapper[37036]: I0312 14:50:21.336835 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.351192 master-0 kubenswrapper[37036]: I0312 14:50:21.351135 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946" (OuterVolumeSpecName: "glance") pod "7e4a0f08-e1d5-4490-a696-b402a042ee61" (UID: "7e4a0f08-e1d5-4490-a696-b402a042ee61"). InnerVolumeSpecName "pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 14:50:21.356934 master-0 kubenswrapper[37036]: I0312 14:50:21.356368 37036 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 14:50:21.356934 master-0 kubenswrapper[37036]: I0312 14:50:21.356763 37036 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2" (UniqueName: "kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1") on node "master-0" Mar 12 14:50:21.433958 master-0 kubenswrapper[37036]: I0312 14:50:21.433819 37036 reconciler_common.go:293] "Volume detached for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:21.434298 master-0 kubenswrapper[37036]: I0312 14:50:21.434269 37036 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") on node \"master-0\" " Mar 12 14:50:21.477205 master-0 kubenswrapper[37036]: I0312 14:50:21.477156 37036 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 14:50:21.477434 master-0 kubenswrapper[37036]: I0312 14:50:21.477321 37036 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14" (UniqueName: "kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946") on node "master-0" Mar 12 14:50:21.537031 master-0 kubenswrapper[37036]: I0312 14:50:21.536972 37036 reconciler_common.go:293] "Volume detached for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:21.588536 master-0 kubenswrapper[37036]: I0312 14:50:21.588481 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:21.605924 master-0 kubenswrapper[37036]: I0312 14:50:21.605864 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:21.616860 master-0 kubenswrapper[37036]: I0312 14:50:21.615567 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:21.616860 master-0 kubenswrapper[37036]: E0312 14:50:21.616147 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b9131a-7cef-4638-8a57-e2109afa2584" containerName="init" Mar 12 14:50:21.616860 master-0 kubenswrapper[37036]: I0312 14:50:21.616161 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b9131a-7cef-4638-8a57-e2109afa2584" containerName="init" Mar 12 14:50:21.616860 master-0 kubenswrapper[37036]: I0312 14:50:21.616390 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b9131a-7cef-4638-8a57-e2109afa2584" containerName="init" Mar 12 14:50:21.619282 master-0 kubenswrapper[37036]: I0312 14:50:21.617569 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.634842 master-0 kubenswrapper[37036]: I0312 14:50:21.634799 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-external-config-data" Mar 12 14:50:21.635088 master-0 kubenswrapper[37036]: I0312 14:50:21.635058 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 14:50:21.648920 master-0 kubenswrapper[37036]: I0312 14:50:21.647988 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662623 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662691 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96j5f\" (UniqueName: \"kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662726 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 
12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662755 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662790 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662806 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662841 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.663886 master-0 kubenswrapper[37036]: I0312 14:50:21.662985 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data\") pod 
\"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.742987 master-0 kubenswrapper[37036]: I0312 14:50:21.742935 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:21.771225 master-0 kubenswrapper[37036]: I0312 14:50:21.771157 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96j5f\" (UniqueName: \"kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.771507 master-0 kubenswrapper[37036]: I0312 14:50:21.771447 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.771767 master-0 kubenswrapper[37036]: I0312 14:50:21.771738 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.771839 master-0 kubenswrapper[37036]: I0312 14:50:21.771821 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" 
Mar 12 14:50:21.771884 master-0 kubenswrapper[37036]: I0312 14:50:21.771847 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.771983 master-0 kubenswrapper[37036]: I0312 14:50:21.771963 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.772651 master-0 kubenswrapper[37036]: I0312 14:50:21.772617 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.772811 master-0 kubenswrapper[37036]: I0312 14:50:21.772778 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.773604 master-0 kubenswrapper[37036]: I0312 14:50:21.773528 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " 
pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.773844 master-0 kubenswrapper[37036]: I0312 14:50:21.773821 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.779662 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.780643 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.780728 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.781223 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.781266 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6c4df80a2d0bf34399f5f7093642ff3bb6c859672516bc87c6e10e693c5b3679/globalmount\"" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.796087 master-0 kubenswrapper[37036]: I0312 14:50:21.794691 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.797614 master-0 kubenswrapper[37036]: I0312 14:50:21.797545 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96j5f\" (UniqueName: \"kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.798458 master-0 kubenswrapper[37036]: I0312 14:50:21.798400 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:21.801012 master-0 kubenswrapper[37036]: I0312 14:50:21.800765 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.805801 master-0 kubenswrapper[37036]: I0312 14:50:21.805747 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-internal-config-data" Mar 12 14:50:21.806055 master-0 kubenswrapper[37036]: I0312 14:50:21.806037 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 14:50:21.817927 master-0 kubenswrapper[37036]: I0312 14:50:21.814997 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:21.826089 master-0 kubenswrapper[37036]: I0312 14:50:21.820993 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:21.977039 master-0 kubenswrapper[37036]: I0312 14:50:21.976949 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844f2\" (UniqueName: \"kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977039 master-0 kubenswrapper[37036]: I0312 14:50:21.977037 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977387 master-0 kubenswrapper[37036]: I0312 
14:50:21.977112 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977387 master-0 kubenswrapper[37036]: I0312 14:50:21.977256 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977387 master-0 kubenswrapper[37036]: I0312 14:50:21.977328 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977387 master-0 kubenswrapper[37036]: I0312 14:50:21.977348 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977569 master-0 kubenswrapper[37036]: I0312 14:50:21.977424 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: 
\"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:21.977569 master-0 kubenswrapper[37036]: I0312 14:50:21.977444 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.080587 master-0 kubenswrapper[37036]: I0312 14:50:22.080511 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.080819 master-0 kubenswrapper[37036]: I0312 14:50:22.080625 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.080819 master-0 kubenswrapper[37036]: I0312 14:50:22.080662 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081381 master-0 kubenswrapper[37036]: I0312 14:50:22.081356 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081660 master-0 kubenswrapper[37036]: I0312 14:50:22.081627 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081778 master-0 kubenswrapper[37036]: I0312 14:50:22.081754 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-844f2\" (UniqueName: \"kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081863 master-0 kubenswrapper[37036]: I0312 14:50:22.081841 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081863 master-0 kubenswrapper[37036]: I0312 14:50:22.081951 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.081863 master-0 kubenswrapper[37036]: I0312 14:50:22.081954 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.082198 master-0 kubenswrapper[37036]: I0312 14:50:22.082096 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.082501 master-0 kubenswrapper[37036]: I0312 14:50:22.082471 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:50:22.082563 master-0 kubenswrapper[37036]: I0312 14:50:22.082498 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/49ce0cc1a33b2e89ca28a5f90915fcaf3a1dd141d163d3ea96d25fddb3a57200/globalmount\"" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.083747 master-0 kubenswrapper[37036]: I0312 14:50:22.083714 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.085588 master-0 kubenswrapper[37036]: I0312 14:50:22.085347 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.087272 master-0 kubenswrapper[37036]: I0312 14:50:22.087086 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.092582 master-0 kubenswrapper[37036]: I0312 14:50:22.092534 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:22.098564 master-0 kubenswrapper[37036]: I0312 14:50:22.098531 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-844f2\" (UniqueName: \"kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:23.141488 master-0 kubenswrapper[37036]: I0312 14:50:23.140107 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 
12 14:50:23.143526 master-0 kubenswrapper[37036]: I0312 14:50:23.143438 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-5ghhs" Mar 12 14:50:23.161781 master-0 kubenswrapper[37036]: I0312 14:50:23.161718 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:23.211047 master-0 kubenswrapper[37036]: I0312 14:50:23.209552 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts\") pod \"93af238f-2462-4584-872e-6e7c2c98b599\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " Mar 12 14:50:23.211047 master-0 kubenswrapper[37036]: I0312 14:50:23.209880 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jft\" (UniqueName: \"kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft\") pod \"93af238f-2462-4584-872e-6e7c2c98b599\" (UID: \"93af238f-2462-4584-872e-6e7c2c98b599\") " Mar 12 14:50:23.211270 master-0 kubenswrapper[37036]: I0312 14:50:23.211212 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93af238f-2462-4584-872e-6e7c2c98b599" (UID: "93af238f-2462-4584-872e-6e7c2c98b599"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:23.214211 master-0 kubenswrapper[37036]: I0312 14:50:23.213344 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft" (OuterVolumeSpecName: "kube-api-access-s5jft") pod "93af238f-2462-4584-872e-6e7c2c98b599" (UID: "93af238f-2462-4584-872e-6e7c2c98b599"). 
InnerVolumeSpecName "kube-api-access-s5jft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:23.251104 master-0 kubenswrapper[37036]: I0312 14:50:23.251059 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-a807-account-create-update-qt766" Mar 12 14:50:23.255106 master-0 kubenswrapper[37036]: I0312 14:50:23.255055 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04bda76a-263f-41d9-a5e0-1a2638a6893f" path="/var/lib/kubelet/pods/04bda76a-263f-41d9-a5e0-1a2638a6893f/volumes" Mar 12 14:50:23.256002 master-0 kubenswrapper[37036]: I0312 14:50:23.255815 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4a0f08-e1d5-4490-a696-b402a042ee61" path="/var/lib/kubelet/pods/7e4a0f08-e1d5-4490-a696-b402a042ee61/volumes" Mar 12 14:50:23.325508 master-0 kubenswrapper[37036]: I0312 14:50:23.325439 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5jft\" (UniqueName: \"kubernetes.io/projected/93af238f-2462-4584-872e-6e7c2c98b599-kube-api-access-s5jft\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:23.325508 master-0 kubenswrapper[37036]: I0312 14:50:23.325479 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93af238f-2462-4584-872e-6e7c2c98b599-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:23.395918 master-0 kubenswrapper[37036]: I0312 14:50:23.395829 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-a807-account-create-update-qt766" event={"ID":"65ac62ad-c04b-410b-bdc7-44e1663f6682","Type":"ContainerDied","Data":"e91312e57504efd2267c7df9b206045079c30b75307ca6334311a0ce41be9490"} Mar 12 14:50:23.396145 master-0 kubenswrapper[37036]: I0312 14:50:23.395927 37036 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e91312e57504efd2267c7df9b206045079c30b75307ca6334311a0ce41be9490" Mar 12 14:50:23.396145 master-0 kubenswrapper[37036]: I0312 14:50:23.395983 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-a807-account-create-update-qt766" Mar 12 14:50:23.399032 master-0 kubenswrapper[37036]: I0312 14:50:23.398990 37036 generic.go:334] "Generic (PLEG): container finished" podID="b5f99014-23e5-4733-a3a7-ed02f994e177" containerID="c53e9b5eca44dbdb711fd9ca31714b12eeb4a562c2a32756e0839e8a22701626" exitCode=0 Mar 12 14:50:23.399137 master-0 kubenswrapper[37036]: I0312 14:50:23.399035 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jf2m8" event={"ID":"b5f99014-23e5-4733-a3a7-ed02f994e177","Type":"ContainerDied","Data":"c53e9b5eca44dbdb711fd9ca31714b12eeb4a562c2a32756e0839e8a22701626"} Mar 12 14:50:23.401059 master-0 kubenswrapper[37036]: I0312 14:50:23.401030 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-5ghhs" event={"ID":"93af238f-2462-4584-872e-6e7c2c98b599","Type":"ContainerDied","Data":"ef6d6b13fbb8e3d426b95fa7eddb52ff190fbf30607964a639c71d8c5e1d4a3e"} Mar 12 14:50:23.401059 master-0 kubenswrapper[37036]: I0312 14:50:23.401053 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6d6b13fbb8e3d426b95fa7eddb52ff190fbf30607964a639c71d8c5e1d4a3e" Mar 12 14:50:23.401320 master-0 kubenswrapper[37036]: I0312 14:50:23.401086 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-5ghhs" Mar 12 14:50:23.414623 master-0 kubenswrapper[37036]: I0312 14:50:23.414470 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rdgvk" podStartSLOduration=3.435585297 podStartE2EDuration="8.414402407s" podCreationTimestamp="2026-03-12 14:50:15 +0000 UTC" firstStartedPulling="2026-03-12 14:50:18.146012569 +0000 UTC m=+877.153753506" lastFinishedPulling="2026-03-12 14:50:23.124829679 +0000 UTC m=+882.132570616" observedRunningTime="2026-03-12 14:50:23.405666499 +0000 UTC m=+882.413407436" watchObservedRunningTime="2026-03-12 14:50:23.414402407 +0000 UTC m=+882.422143344" Mar 12 14:50:23.426541 master-0 kubenswrapper[37036]: I0312 14:50:23.426488 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pdfd\" (UniqueName: \"kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd\") pod \"65ac62ad-c04b-410b-bdc7-44e1663f6682\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " Mar 12 14:50:23.428009 master-0 kubenswrapper[37036]: I0312 14:50:23.427966 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts\") pod \"65ac62ad-c04b-410b-bdc7-44e1663f6682\" (UID: \"65ac62ad-c04b-410b-bdc7-44e1663f6682\") " Mar 12 14:50:23.431637 master-0 kubenswrapper[37036]: I0312 14:50:23.431587 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65ac62ad-c04b-410b-bdc7-44e1663f6682" (UID: "65ac62ad-c04b-410b-bdc7-44e1663f6682"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:23.435045 master-0 kubenswrapper[37036]: I0312 14:50:23.435017 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65ac62ad-c04b-410b-bdc7-44e1663f6682-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:23.437500 master-0 kubenswrapper[37036]: I0312 14:50:23.437462 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd" (OuterVolumeSpecName: "kube-api-access-5pdfd") pod "65ac62ad-c04b-410b-bdc7-44e1663f6682" (UID: "65ac62ad-c04b-410b-bdc7-44e1663f6682"). InnerVolumeSpecName "kube-api-access-5pdfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:23.538416 master-0 kubenswrapper[37036]: I0312 14:50:23.538137 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pdfd\" (UniqueName: \"kubernetes.io/projected/65ac62ad-c04b-410b-bdc7-44e1663f6682-kube-api-access-5pdfd\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:23.749772 master-0 kubenswrapper[37036]: I0312 14:50:23.749703 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:50:24.418928 master-0 kubenswrapper[37036]: I0312 14:50:24.418715 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerStarted","Data":"4a5d7cdb26d1dba2275f36ad028c23931eadc88d305215e98e2edefe9cf43015"} Mar 12 14:50:24.418928 master-0 kubenswrapper[37036]: I0312 14:50:24.418774 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerStarted","Data":"2532e412223bca47be9e5608f3bc52f131cab2bb7d78bc36893204c82a46cb2d"} Mar 12 
14:50:24.424973 master-0 kubenswrapper[37036]: I0312 14:50:24.424867 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdgvk" event={"ID":"36a55e95-783b-40ef-996a-5e29f87dc118","Type":"ContainerStarted","Data":"a29229b6c6fb21ea70f5454f6978aa17cb585878ac68aa0fe3e747ded57cd934"} Mar 12 14:50:24.519223 master-0 kubenswrapper[37036]: I0312 14:50:24.518213 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:24.609941 master-0 kubenswrapper[37036]: I0312 14:50:24.609839 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:24.819598 master-0 kubenswrapper[37036]: I0312 14:50:24.819558 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jf2m8" Mar 12 14:50:24.992551 master-0 kubenswrapper[37036]: I0312 14:50:24.992494 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8wjp\" (UniqueName: \"kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.992757 master-0 kubenswrapper[37036]: I0312 14:50:24.992654 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.992757 master-0 kubenswrapper[37036]: I0312 14:50:24.992695 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.992854 master-0 kubenswrapper[37036]: I0312 14:50:24.992799 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.992854 master-0 kubenswrapper[37036]: I0312 14:50:24.992840 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.993020 master-0 kubenswrapper[37036]: I0312 14:50:24.992987 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data\") pod \"b5f99014-23e5-4733-a3a7-ed02f994e177\" (UID: \"b5f99014-23e5-4733-a3a7-ed02f994e177\") " Mar 12 14:50:24.997658 master-0 kubenswrapper[37036]: I0312 14:50:24.997588 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:24.998150 master-0 kubenswrapper[37036]: I0312 14:50:24.998101 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts" (OuterVolumeSpecName: "scripts") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:24.998233 master-0 kubenswrapper[37036]: I0312 14:50:24.998106 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp" (OuterVolumeSpecName: "kube-api-access-p8wjp") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). InnerVolumeSpecName "kube-api-access-p8wjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:25.000001 master-0 kubenswrapper[37036]: I0312 14:50:24.999886 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:25.031977 master-0 kubenswrapper[37036]: I0312 14:50:25.029169 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data" (OuterVolumeSpecName: "config-data") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:25.066184 master-0 kubenswrapper[37036]: I0312 14:50:25.065914 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5f99014-23e5-4733-a3a7-ed02f994e177" (UID: "b5f99014-23e5-4733-a3a7-ed02f994e177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:25.095876 master-0 kubenswrapper[37036]: I0312 14:50:25.095805 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8wjp\" (UniqueName: \"kubernetes.io/projected/b5f99014-23e5-4733-a3a7-ed02f994e177-kube-api-access-p8wjp\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.095876 master-0 kubenswrapper[37036]: I0312 14:50:25.095859 37036 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.096158 master-0 kubenswrapper[37036]: I0312 14:50:25.095885 37036 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.096158 master-0 kubenswrapper[37036]: I0312 14:50:25.095921 37036 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.096158 master-0 kubenswrapper[37036]: I0312 14:50:25.095939 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.096158 master-0 kubenswrapper[37036]: I0312 14:50:25.095954 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f99014-23e5-4733-a3a7-ed02f994e177-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:25.437306 master-0 kubenswrapper[37036]: I0312 14:50:25.437227 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerStarted","Data":"c8ba044ac56699d5d1fefb52ed073dbfee76f81402b701b3312728e398391369"} Mar 12 14:50:25.439977 master-0 kubenswrapper[37036]: I0312 14:50:25.439928 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jf2m8" event={"ID":"b5f99014-23e5-4733-a3a7-ed02f994e177","Type":"ContainerDied","Data":"af46b1b22523d15b3dd9a6a98d793f5968a2e64866ce1781c5c35a4b616397e0"} Mar 12 14:50:25.439977 master-0 kubenswrapper[37036]: I0312 14:50:25.439969 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af46b1b22523d15b3dd9a6a98d793f5968a2e64866ce1781c5c35a4b616397e0" Mar 12 14:50:25.440227 master-0 kubenswrapper[37036]: I0312 14:50:25.439982 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jf2m8" Mar 12 14:50:25.671326 master-0 kubenswrapper[37036]: I0312 14:50:25.669489 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bc20e-default-external-api-0" podStartSLOduration=4.669469379 podStartE2EDuration="4.669469379s" podCreationTimestamp="2026-03-12 14:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:25.659069492 +0000 UTC m=+884.666810449" watchObservedRunningTime="2026-03-12 14:50:25.669469379 +0000 UTC m=+884.677210306" Mar 12 14:50:25.696587 master-0 kubenswrapper[37036]: I0312 14:50:25.696214 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:50:25.736776 master-0 kubenswrapper[37036]: I0312 14:50:25.736706 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jf2m8"] Mar 12 14:50:25.748200 master-0 kubenswrapper[37036]: I0312 14:50:25.748152 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jf2m8"] Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.829992 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l6zns"] Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: E0312 14:50:25.831774 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93af238f-2462-4584-872e-6e7c2c98b599" containerName="mariadb-database-create" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.831812 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="93af238f-2462-4584-872e-6e7c2c98b599" containerName="mariadb-database-create" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: E0312 14:50:25.831884 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f99014-23e5-4733-a3a7-ed02f994e177" 
containerName="keystone-bootstrap" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.831906 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f99014-23e5-4733-a3a7-ed02f994e177" containerName="keystone-bootstrap" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: E0312 14:50:25.831942 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ac62ad-c04b-410b-bdc7-44e1663f6682" containerName="mariadb-account-create-update" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.831949 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ac62ad-c04b-410b-bdc7-44e1663f6682" containerName="mariadb-account-create-update" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.832611 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ac62ad-c04b-410b-bdc7-44e1663f6682" containerName="mariadb-account-create-update" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.832659 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f99014-23e5-4733-a3a7-ed02f994e177" containerName="keystone-bootstrap" Mar 12 14:50:25.833204 master-0 kubenswrapper[37036]: I0312 14:50:25.832907 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="93af238f-2462-4584-872e-6e7c2c98b599" containerName="mariadb-database-create" Mar 12 14:50:25.853786 master-0 kubenswrapper[37036]: I0312 14:50:25.836320 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.853786 master-0 kubenswrapper[37036]: I0312 14:50:25.840145 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 14:50:25.853786 master-0 kubenswrapper[37036]: I0312 14:50:25.840807 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 14:50:25.853786 master-0 kubenswrapper[37036]: I0312 14:50:25.841443 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 14:50:25.853786 master-0 kubenswrapper[37036]: I0312 14:50:25.845740 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 14:50:25.882712 master-0 kubenswrapper[37036]: I0312 14:50:25.882658 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6zns"] Mar 12 14:50:25.926052 master-0 kubenswrapper[37036]: I0312 14:50:25.925855 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.926052 master-0 kubenswrapper[37036]: I0312 14:50:25.926011 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.926285 master-0 kubenswrapper[37036]: I0312 14:50:25.926227 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.926285 master-0 kubenswrapper[37036]: I0312 14:50:25.926251 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.926418 master-0 kubenswrapper[37036]: I0312 14:50:25.926388 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:25.926467 master-0 kubenswrapper[37036]: I0312 14:50:25.926418 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvq8\" (UniqueName: \"kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.011187 master-0 kubenswrapper[37036]: I0312 14:50:26.010746 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-hzl7q"] Mar 12 14:50:26.012858 master-0 kubenswrapper[37036]: I0312 14:50:26.012797 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.023171 master-0 kubenswrapper[37036]: I0312 14:50:26.022123 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-hzl7q"] Mar 12 14:50:26.030768 master-0 kubenswrapper[37036]: I0312 14:50:26.030574 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Mar 12 14:50:26.030768 master-0 kubenswrapper[37036]: I0312 14:50:26.030605 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031194 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031254 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvq8\" (UniqueName: \"kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031291 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031333 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031446 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.032036 master-0 kubenswrapper[37036]: I0312 14:50:26.031464 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.055212 master-0 kubenswrapper[37036]: I0312 14:50:26.050844 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.055212 master-0 kubenswrapper[37036]: I0312 14:50:26.051379 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.056505 master-0 kubenswrapper[37036]: I0312 14:50:26.056446 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.057202 master-0 kubenswrapper[37036]: I0312 14:50:26.057148 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvq8\" (UniqueName: \"kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.072936 master-0 kubenswrapper[37036]: I0312 14:50:26.071428 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.074593 master-0 kubenswrapper[37036]: I0312 14:50:26.074518 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys\") pod \"keystone-bootstrap-l6zns\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.134136 master-0 kubenswrapper[37036]: I0312 14:50:26.134068 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzj7\" (UniqueName: \"kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.134359 master-0 kubenswrapper[37036]: I0312 14:50:26.134300 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.134419 master-0 kubenswrapper[37036]: I0312 14:50:26.134355 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.134555 master-0 kubenswrapper[37036]: I0312 14:50:26.134518 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.134626 master-0 kubenswrapper[37036]: I0312 14:50:26.134566 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.134626 master-0 kubenswrapper[37036]: I0312 14:50:26.134610 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.201339 master-0 kubenswrapper[37036]: I0312 14:50:26.201215 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:26.236614 master-0 kubenswrapper[37036]: I0312 14:50:26.236558 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwzj7\" (UniqueName: \"kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.236828 master-0 kubenswrapper[37036]: I0312 14:50:26.236634 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.236828 master-0 kubenswrapper[37036]: I0312 14:50:26.236653 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.236828 master-0 kubenswrapper[37036]: I0312 14:50:26.236716 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.236828 master-0 kubenswrapper[37036]: I0312 14:50:26.236734 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 
12 14:50:26.236828 master-0 kubenswrapper[37036]: I0312 14:50:26.236752 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.239385 master-0 kubenswrapper[37036]: I0312 14:50:26.237808 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.242805 master-0 kubenswrapper[37036]: I0312 14:50:26.242764 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.242926 master-0 kubenswrapper[37036]: I0312 14:50:26.242804 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.242926 master-0 kubenswrapper[37036]: I0312 14:50:26.242804 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.245690 master-0 kubenswrapper[37036]: I0312 14:50:26.245659 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.259551 master-0 kubenswrapper[37036]: I0312 14:50:26.259245 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwzj7\" (UniqueName: \"kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7\") pod \"ironic-db-sync-hzl7q\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") " pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.470805 master-0 kubenswrapper[37036]: I0312 14:50:26.470767 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-hzl7q" Mar 12 14:50:26.482354 master-0 kubenswrapper[37036]: I0312 14:50:26.482280 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerStarted","Data":"a4712f26a1b45f8060cab3e25f59a258c36af65889631302c1c3828633456a0f"} Mar 12 14:50:26.482354 master-0 kubenswrapper[37036]: I0312 14:50:26.482350 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerStarted","Data":"7e0f8571fa2b0b5c693d201f6033339b5542443ef2433e90927fdb2f03e11ec6"} Mar 12 14:50:26.740327 master-0 kubenswrapper[37036]: I0312 14:50:26.740252 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l6zns"] Mar 12 14:50:26.927805 master-0 kubenswrapper[37036]: I0312 14:50:26.927737 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-hzl7q"] Mar 12 14:50:27.215009 master-0 kubenswrapper[37036]: I0312 14:50:27.214097 37036 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" Mar 12 14:50:27.267229 master-0 kubenswrapper[37036]: I0312 14:50:27.261376 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f99014-23e5-4733-a3a7-ed02f994e177" path="/var/lib/kubelet/pods/b5f99014-23e5-4733-a3a7-ed02f994e177/volumes" Mar 12 14:50:27.325273 master-0 kubenswrapper[37036]: I0312 14:50:27.319529 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"] Mar 12 14:50:27.325273 master-0 kubenswrapper[37036]: I0312 14:50:27.319766 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" containerID="cri-o://a1c57e3759f1afb00b6229f6176686f107112f53db8e4c19c06fb14a9140a225" gracePeriod=10 Mar 12 14:50:28.341394 master-0 kubenswrapper[37036]: I0312 14:50:28.341317 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.192:5353: connect: connection refused" Mar 12 14:50:33.162986 master-0 kubenswrapper[37036]: I0312 14:50:33.162883 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:33.162986 master-0 kubenswrapper[37036]: I0312 14:50:33.162979 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:33.200826 master-0 kubenswrapper[37036]: I0312 14:50:33.200770 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:33.210524 master-0 kubenswrapper[37036]: I0312 14:50:33.210478 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:33.340948 master-0 kubenswrapper[37036]: I0312 14:50:33.340867 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.192:5353: connect: connection refused" Mar 12 14:50:33.604790 master-0 kubenswrapper[37036]: I0312 14:50:33.604735 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:33.604790 master-0 kubenswrapper[37036]: I0312 14:50:33.604784 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:34.697247 master-0 kubenswrapper[37036]: W0312 14:50:34.696088 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67c2c80d_8881_4a05_8d2f_2350b3848b13.slice/crio-a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828 WatchSource:0}: Error finding container a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828: Status 404 returned error can't find the container with id a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828 Mar 12 14:50:34.728372 master-0 kubenswrapper[37036]: W0312 14:50:34.728273 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a1efb_8f79_468f_a3e1_2d42cba4456a.slice/crio-2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4 WatchSource:0}: Error finding container 2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4: Status 404 returned error can't find the container with id 2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4 Mar 12 14:50:35.631302 master-0 kubenswrapper[37036]: I0312 14:50:35.631212 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6zns" event={"ID":"b41a1efb-8f79-468f-a3e1-2d42cba4456a","Type":"ContainerStarted","Data":"2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4"} Mar 12 14:50:35.632793 master-0 kubenswrapper[37036]: I0312 14:50:35.632713 37036 generic.go:334] "Generic (PLEG): container finished" podID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerID="a1c57e3759f1afb00b6229f6176686f107112f53db8e4c19c06fb14a9140a225" exitCode=0 Mar 12 14:50:35.633059 master-0 kubenswrapper[37036]: I0312 14:50:35.632800 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" event={"ID":"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9","Type":"ContainerDied","Data":"a1c57e3759f1afb00b6229f6176686f107112f53db8e4c19c06fb14a9140a225"} Mar 12 14:50:35.640530 master-0 kubenswrapper[37036]: I0312 14:50:35.634708 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-hzl7q" event={"ID":"67c2c80d-8881-4a05-8d2f-2350b3848b13","Type":"ContainerStarted","Data":"a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828"} Mar 12 14:50:35.640530 master-0 kubenswrapper[37036]: I0312 14:50:35.636326 37036 generic.go:334] "Generic (PLEG): container finished" podID="36a55e95-783b-40ef-996a-5e29f87dc118" containerID="a29229b6c6fb21ea70f5454f6978aa17cb585878ac68aa0fe3e747ded57cd934" exitCode=0 Mar 12 14:50:35.640530 master-0 kubenswrapper[37036]: I0312 14:50:35.636426 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdgvk" event={"ID":"36a55e95-783b-40ef-996a-5e29f87dc118","Type":"ContainerDied","Data":"a29229b6c6fb21ea70f5454f6978aa17cb585878ac68aa0fe3e747ded57cd934"} Mar 12 14:50:35.644086 master-0 kubenswrapper[37036]: I0312 14:50:35.643657 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" 
event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerStarted","Data":"b6861f8141728bac76ef239d5809958cef5345452e37261a4216bcbf824f1dba"} Mar 12 14:50:35.742624 master-0 kubenswrapper[37036]: I0312 14:50:35.742442 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:35.742624 master-0 kubenswrapper[37036]: I0312 14:50:35.742552 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:50:35.761564 master-0 kubenswrapper[37036]: I0312 14:50:35.760722 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:50:35.777129 master-0 kubenswrapper[37036]: I0312 14:50:35.772937 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bc20e-default-internal-api-0" podStartSLOduration=14.772906248 podStartE2EDuration="14.772906248s" podCreationTimestamp="2026-03-12 14:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:35.748452452 +0000 UTC m=+894.756193409" watchObservedRunningTime="2026-03-12 14:50:35.772906248 +0000 UTC m=+894.780647185" Mar 12 14:50:36.448927 master-0 kubenswrapper[37036]: I0312 14:50:36.448523 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" Mar 12 14:50:36.552927 master-0 kubenswrapper[37036]: I0312 14:50:36.550859 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb\") pod \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " Mar 12 14:50:36.552927 master-0 kubenswrapper[37036]: I0312 14:50:36.551132 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqfq\" (UniqueName: \"kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq\") pod \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " Mar 12 14:50:36.552927 master-0 kubenswrapper[37036]: I0312 14:50:36.551190 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc\") pod \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " Mar 12 14:50:36.552927 master-0 kubenswrapper[37036]: I0312 14:50:36.551805 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config\") pod \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " Mar 12 14:50:36.552927 master-0 kubenswrapper[37036]: I0312 14:50:36.551840 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb\") pod \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\" (UID: \"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9\") " Mar 12 14:50:36.565165 master-0 kubenswrapper[37036]: I0312 14:50:36.564875 37036 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq" (OuterVolumeSpecName: "kube-api-access-nhqfq") pod "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" (UID: "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9"). InnerVolumeSpecName "kube-api-access-nhqfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:36.614816 master-0 kubenswrapper[37036]: I0312 14:50:36.614289 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" (UID: "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:36.657076 master-0 kubenswrapper[37036]: I0312 14:50:36.656979 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:36.657076 master-0 kubenswrapper[37036]: I0312 14:50:36.657016 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhqfq\" (UniqueName: \"kubernetes.io/projected/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-kube-api-access-nhqfq\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:36.659249 master-0 kubenswrapper[37036]: I0312 14:50:36.659191 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" (UID: "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:36.659328 master-0 kubenswrapper[37036]: I0312 14:50:36.659263 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" event={"ID":"3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9","Type":"ContainerDied","Data":"04911b8fc1f8573d5632919377c3fa38b18e16640d73857141cf5c58e965df4d"} Mar 12 14:50:36.659366 master-0 kubenswrapper[37036]: I0312 14:50:36.659323 37036 scope.go:117] "RemoveContainer" containerID="a1c57e3759f1afb00b6229f6176686f107112f53db8e4c19c06fb14a9140a225" Mar 12 14:50:36.659667 master-0 kubenswrapper[37036]: I0312 14:50:36.659629 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-97zvq" Mar 12 14:50:36.685926 master-0 kubenswrapper[37036]: I0312 14:50:36.682253 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config" (OuterVolumeSpecName: "config") pod "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" (UID: "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:36.692089 master-0 kubenswrapper[37036]: I0312 14:50:36.688676 37036 scope.go:117] "RemoveContainer" containerID="da5ff2b7e53cf6d1ea16e443dc92e4e165ca3226c259a471cc05aad87cea66f3" Mar 12 14:50:36.772215 master-0 kubenswrapper[37036]: I0312 14:50:36.759040 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:36.772215 master-0 kubenswrapper[37036]: I0312 14:50:36.759072 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:36.795243 master-0 kubenswrapper[37036]: I0312 14:50:36.795168 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" (UID: "3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:36.862016 master-0 kubenswrapper[37036]: I0312 14:50:36.861613 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.054271 master-0 kubenswrapper[37036]: I0312 14:50:37.054150 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"] Mar 12 14:50:37.099518 master-0 kubenswrapper[37036]: I0312 14:50:37.099324 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-97zvq"] Mar 12 14:50:37.173343 master-0 kubenswrapper[37036]: I0312 14:50:37.173286 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:37.270934 master-0 kubenswrapper[37036]: I0312 14:50:37.270593 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" path="/var/lib/kubelet/pods/3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9/volumes" Mar 12 14:50:37.282193 master-0 kubenswrapper[37036]: I0312 14:50:37.281726 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z6d7\" (UniqueName: \"kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7\") pod \"36a55e95-783b-40ef-996a-5e29f87dc118\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " Mar 12 14:50:37.282193 master-0 kubenswrapper[37036]: I0312 14:50:37.281834 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts\") pod \"36a55e95-783b-40ef-996a-5e29f87dc118\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " Mar 12 14:50:37.282458 master-0 kubenswrapper[37036]: I0312 14:50:37.281874 37036 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs\") pod \"36a55e95-783b-40ef-996a-5e29f87dc118\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " Mar 12 14:50:37.282458 master-0 kubenswrapper[37036]: I0312 14:50:37.282284 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs" (OuterVolumeSpecName: "logs") pod "36a55e95-783b-40ef-996a-5e29f87dc118" (UID: "36a55e95-783b-40ef-996a-5e29f87dc118"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:37.282458 master-0 kubenswrapper[37036]: I0312 14:50:37.282361 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data\") pod \"36a55e95-783b-40ef-996a-5e29f87dc118\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " Mar 12 14:50:37.283311 master-0 kubenswrapper[37036]: I0312 14:50:37.283268 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle\") pod \"36a55e95-783b-40ef-996a-5e29f87dc118\" (UID: \"36a55e95-783b-40ef-996a-5e29f87dc118\") " Mar 12 14:50:37.285404 master-0 kubenswrapper[37036]: I0312 14:50:37.285334 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36a55e95-783b-40ef-996a-5e29f87dc118-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.287277 master-0 kubenswrapper[37036]: I0312 14:50:37.287222 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts" (OuterVolumeSpecName: "scripts") pod "36a55e95-783b-40ef-996a-5e29f87dc118" (UID: 
"36a55e95-783b-40ef-996a-5e29f87dc118"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:37.287277 master-0 kubenswrapper[37036]: I0312 14:50:37.287249 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7" (OuterVolumeSpecName: "kube-api-access-2z6d7") pod "36a55e95-783b-40ef-996a-5e29f87dc118" (UID: "36a55e95-783b-40ef-996a-5e29f87dc118"). InnerVolumeSpecName "kube-api-access-2z6d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:37.308379 master-0 kubenswrapper[37036]: I0312 14:50:37.308242 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36a55e95-783b-40ef-996a-5e29f87dc118" (UID: "36a55e95-783b-40ef-996a-5e29f87dc118"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:37.343949 master-0 kubenswrapper[37036]: I0312 14:50:37.343891 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data" (OuterVolumeSpecName: "config-data") pod "36a55e95-783b-40ef-996a-5e29f87dc118" (UID: "36a55e95-783b-40ef-996a-5e29f87dc118"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:37.387743 master-0 kubenswrapper[37036]: I0312 14:50:37.387691 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.387743 master-0 kubenswrapper[37036]: I0312 14:50:37.387732 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z6d7\" (UniqueName: \"kubernetes.io/projected/36a55e95-783b-40ef-996a-5e29f87dc118-kube-api-access-2z6d7\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.387743 master-0 kubenswrapper[37036]: I0312 14:50:37.387745 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.387743 master-0 kubenswrapper[37036]: I0312 14:50:37.387755 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a55e95-783b-40ef-996a-5e29f87dc118-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:37.673309 master-0 kubenswrapper[37036]: I0312 14:50:37.673187 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdgvk" event={"ID":"36a55e95-783b-40ef-996a-5e29f87dc118","Type":"ContainerDied","Data":"b22d7c12a7792b3c33f0cbf1150fe1b00dd3aaab44014f8e994eb1d1209b69e1"} Mar 12 14:50:37.673309 master-0 kubenswrapper[37036]: I0312 14:50:37.673248 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22d7c12a7792b3c33f0cbf1150fe1b00dd3aaab44014f8e994eb1d1209b69e1" Mar 12 14:50:37.673522 master-0 kubenswrapper[37036]: I0312 14:50:37.673362 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rdgvk" Mar 12 14:50:37.690000 master-0 kubenswrapper[37036]: I0312 14:50:37.689951 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-db-sync-6bdmp" event={"ID":"75876022-f077-4c9e-95c1-3d0b1dbb61a3","Type":"ContainerStarted","Data":"aece55dcd5ca547bc270fd9af22f19103250fe7a8fcb36ebe6f7e2caad82ca26"} Mar 12 14:50:37.702631 master-0 kubenswrapper[37036]: I0312 14:50:37.702572 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6zns" event={"ID":"b41a1efb-8f79-468f-a3e1-2d42cba4456a","Type":"ContainerStarted","Data":"0b393615d37f64bb9a2f1752b7a46d53905bd86a6ca961d5d64e6dd58f83b09b"} Mar 12 14:50:37.719570 master-0 kubenswrapper[37036]: I0312 14:50:37.719483 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-db-sync-6bdmp" podStartSLOduration=3.626080687 podStartE2EDuration="22.719462642s" podCreationTimestamp="2026-03-12 14:50:15 +0000 UTC" firstStartedPulling="2026-03-12 14:50:17.369024623 +0000 UTC m=+876.376765560" lastFinishedPulling="2026-03-12 14:50:36.462406578 +0000 UTC m=+895.470147515" observedRunningTime="2026-03-12 14:50:37.707576374 +0000 UTC m=+896.715317341" watchObservedRunningTime="2026-03-12 14:50:37.719462642 +0000 UTC m=+896.727203579" Mar 12 14:50:37.738991 master-0 kubenswrapper[37036]: I0312 14:50:37.738882 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l6zns" podStartSLOduration=12.738862489 podStartE2EDuration="12.738862489s" podCreationTimestamp="2026-03-12 14:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:37.732871495 +0000 UTC m=+896.740612422" watchObservedRunningTime="2026-03-12 14:50:37.738862489 +0000 UTC m=+896.746603426" Mar 12 14:50:38.139035 master-0 kubenswrapper[37036]: I0312 
14:50:38.138148 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b5866567d-h9t4r"] Mar 12 14:50:38.169773 master-0 kubenswrapper[37036]: E0312 14:50:38.169720 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a55e95-783b-40ef-996a-5e29f87dc118" containerName="placement-db-sync" Mar 12 14:50:38.170047 master-0 kubenswrapper[37036]: I0312 14:50:38.170027 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a55e95-783b-40ef-996a-5e29f87dc118" containerName="placement-db-sync" Mar 12 14:50:38.170170 master-0 kubenswrapper[37036]: E0312 14:50:38.170159 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="init" Mar 12 14:50:38.170249 master-0 kubenswrapper[37036]: I0312 14:50:38.170237 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="init" Mar 12 14:50:38.170341 master-0 kubenswrapper[37036]: E0312 14:50:38.170330 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" Mar 12 14:50:38.170414 master-0 kubenswrapper[37036]: I0312 14:50:38.170404 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" Mar 12 14:50:38.173189 master-0 kubenswrapper[37036]: I0312 14:50:38.173134 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3095c7fb-7e2c-4a5f-a5e5-0a1876168cd9" containerName="dnsmasq-dns" Mar 12 14:50:38.175018 master-0 kubenswrapper[37036]: I0312 14:50:38.173272 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a55e95-783b-40ef-996a-5e29f87dc118" containerName="placement-db-sync" Mar 12 14:50:38.179414 master-0 kubenswrapper[37036]: I0312 14:50:38.179374 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.183581 master-0 kubenswrapper[37036]: I0312 14:50:38.182713 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 14:50:38.183581 master-0 kubenswrapper[37036]: I0312 14:50:38.182840 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 12 14:50:38.183581 master-0 kubenswrapper[37036]: I0312 14:50:38.183455 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 12 14:50:38.184205 master-0 kubenswrapper[37036]: I0312 14:50:38.183595 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 14:50:38.187771 master-0 kubenswrapper[37036]: I0312 14:50:38.187719 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b5866567d-h9t4r"] Mar 12 14:50:38.311029 master-0 kubenswrapper[37036]: I0312 14:50:38.310984 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27k4\" (UniqueName: \"kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.311419 master-0 kubenswrapper[37036]: I0312 14:50:38.311348 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.311616 master-0 kubenswrapper[37036]: I0312 14:50:38.311597 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.311736 master-0 kubenswrapper[37036]: I0312 14:50:38.311722 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.311851 master-0 kubenswrapper[37036]: I0312 14:50:38.311838 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.312026 master-0 kubenswrapper[37036]: I0312 14:50:38.312012 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.312150 master-0 kubenswrapper[37036]: I0312 14:50:38.312132 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.413779 master-0 kubenswrapper[37036]: I0312 14:50:38.413668 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.413779 master-0 kubenswrapper[37036]: I0312 14:50:38.413745 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.413779 master-0 kubenswrapper[37036]: I0312 14:50:38.413782 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.414041 master-0 kubenswrapper[37036]: I0312 14:50:38.413803 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.414041 master-0 kubenswrapper[37036]: I0312 14:50:38.413827 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.414041 master-0 kubenswrapper[37036]: I0312 14:50:38.413880 37036 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-h27k4\" (UniqueName: \"kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.414041 master-0 kubenswrapper[37036]: I0312 14:50:38.413938 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.415723 master-0 kubenswrapper[37036]: I0312 14:50:38.415685 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.417865 master-0 kubenswrapper[37036]: I0312 14:50:38.417829 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.418726 master-0 kubenswrapper[37036]: I0312 14:50:38.418675 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.419162 master-0 kubenswrapper[37036]: I0312 14:50:38.419125 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.419547 master-0 kubenswrapper[37036]: I0312 14:50:38.419497 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.430576 master-0 kubenswrapper[37036]: I0312 14:50:38.430460 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.433437 master-0 kubenswrapper[37036]: I0312 14:50:38.432608 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27k4\" (UniqueName: \"kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4\") pod \"placement-b5866567d-h9t4r\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:38.507115 master-0 kubenswrapper[37036]: I0312 14:50:38.507058 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:40.137226 master-0 kubenswrapper[37036]: E0312 14:50:40.137151 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a1efb_8f79_468f_a3e1_2d42cba4456a.slice/crio-conmon-0b393615d37f64bb9a2f1752b7a46d53905bd86a6ca961d5d64e6dd58f83b09b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a1efb_8f79_468f_a3e1_2d42cba4456a.slice/crio-0b393615d37f64bb9a2f1752b7a46d53905bd86a6ca961d5d64e6dd58f83b09b.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:50:41.800460 master-0 kubenswrapper[37036]: I0312 14:50:41.799570 37036 generic.go:334] "Generic (PLEG): container finished" podID="b41a1efb-8f79-468f-a3e1-2d42cba4456a" containerID="0b393615d37f64bb9a2f1752b7a46d53905bd86a6ca961d5d64e6dd58f83b09b" exitCode=0 Mar 12 14:50:41.800460 master-0 kubenswrapper[37036]: I0312 14:50:41.799629 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6zns" event={"ID":"b41a1efb-8f79-468f-a3e1-2d42cba4456a","Type":"ContainerDied","Data":"0b393615d37f64bb9a2f1752b7a46d53905bd86a6ca961d5d64e6dd58f83b09b"} Mar 12 14:50:42.166671 master-0 kubenswrapper[37036]: I0312 14:50:42.166483 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b5866567d-h9t4r"] Mar 12 14:50:42.194807 master-0 kubenswrapper[37036]: W0312 14:50:42.194634 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f8830e_f634_424d_b7b7_606453255117.slice/crio-173400c1b05979fc25f005d7ed422981b58f01bfb55887d568e9b58239906968 WatchSource:0}: Error finding container 173400c1b05979fc25f005d7ed422981b58f01bfb55887d568e9b58239906968: Status 404 returned error can't find the container with 
id 173400c1b05979fc25f005d7ed422981b58f01bfb55887d568e9b58239906968 Mar 12 14:50:42.823997 master-0 kubenswrapper[37036]: I0312 14:50:42.823940 37036 generic.go:334] "Generic (PLEG): container finished" podID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerID="30c26a106477cd540f89eb213e1764346847b2dfed1514fc16942d60963bc415" exitCode=0 Mar 12 14:50:42.825196 master-0 kubenswrapper[37036]: I0312 14:50:42.825137 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-hzl7q" event={"ID":"67c2c80d-8881-4a05-8d2f-2350b3848b13","Type":"ContainerDied","Data":"30c26a106477cd540f89eb213e1764346847b2dfed1514fc16942d60963bc415"} Mar 12 14:50:42.827911 master-0 kubenswrapper[37036]: I0312 14:50:42.827848 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerStarted","Data":"968ac1a419ce83c9df0b7935a1d2d00b1c77d46fefb605ce697a12a530f04a12"} Mar 12 14:50:42.827911 master-0 kubenswrapper[37036]: I0312 14:50:42.827909 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerStarted","Data":"493a3f7dd30049dbc9300fbe1793f1c92e24f73ae60232387d925f9a1db82115"} Mar 12 14:50:42.828052 master-0 kubenswrapper[37036]: I0312 14:50:42.827920 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerStarted","Data":"173400c1b05979fc25f005d7ed422981b58f01bfb55887d568e9b58239906968"} Mar 12 14:50:42.892210 master-0 kubenswrapper[37036]: I0312 14:50:42.892129 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b5866567d-h9t4r" podStartSLOduration=4.892108813 podStartE2EDuration="4.892108813s" podCreationTimestamp="2026-03-12 14:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:42.886452284 +0000 UTC m=+901.894193221" watchObservedRunningTime="2026-03-12 14:50:42.892108813 +0000 UTC m=+901.899849750" Mar 12 14:50:43.312065 master-0 kubenswrapper[37036]: I0312 14:50:43.312010 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:43.433504 master-0 kubenswrapper[37036]: I0312 14:50:43.433413 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.433739 master-0 kubenswrapper[37036]: I0312 14:50:43.433529 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.433739 master-0 kubenswrapper[37036]: I0312 14:50:43.433672 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.433838 master-0 kubenswrapper[37036]: I0312 14:50:43.433746 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.433838 master-0 kubenswrapper[37036]: I0312 14:50:43.433802 37036 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gnvq8\" (UniqueName: \"kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.433956 master-0 kubenswrapper[37036]: I0312 14:50:43.433918 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts\") pod \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\" (UID: \"b41a1efb-8f79-468f-a3e1-2d42cba4456a\") " Mar 12 14:50:43.436712 master-0 kubenswrapper[37036]: I0312 14:50:43.436643 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8" (OuterVolumeSpecName: "kube-api-access-gnvq8") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "kube-api-access-gnvq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:43.437072 master-0 kubenswrapper[37036]: I0312 14:50:43.437030 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:43.437923 master-0 kubenswrapper[37036]: I0312 14:50:43.437860 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:43.438254 master-0 kubenswrapper[37036]: I0312 14:50:43.438196 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts" (OuterVolumeSpecName: "scripts") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:43.460787 master-0 kubenswrapper[37036]: I0312 14:50:43.460700 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data" (OuterVolumeSpecName: "config-data") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:43.467200 master-0 kubenswrapper[37036]: I0312 14:50:43.467135 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b41a1efb-8f79-468f-a3e1-2d42cba4456a" (UID: "b41a1efb-8f79-468f-a3e1-2d42cba4456a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536363 37036 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536487 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536505 37036 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536517 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536531 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnvq8\" (UniqueName: \"kubernetes.io/projected/b41a1efb-8f79-468f-a3e1-2d42cba4456a-kube-api-access-gnvq8\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.536560 master-0 kubenswrapper[37036]: I0312 14:50:43.536543 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41a1efb-8f79-468f-a3e1-2d42cba4456a-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:43.843372 master-0 kubenswrapper[37036]: I0312 14:50:43.843235 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l6zns" Mar 12 14:50:43.843372 master-0 kubenswrapper[37036]: I0312 14:50:43.843285 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l6zns" event={"ID":"b41a1efb-8f79-468f-a3e1-2d42cba4456a","Type":"ContainerDied","Data":"2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4"} Mar 12 14:50:43.844024 master-0 kubenswrapper[37036]: I0312 14:50:43.843380 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da46379ff99647ba1f792677f5496698462d51add2dde6d9518622341b0b2c4" Mar 12 14:50:43.846200 master-0 kubenswrapper[37036]: I0312 14:50:43.846144 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-hzl7q" event={"ID":"67c2c80d-8881-4a05-8d2f-2350b3848b13","Type":"ContainerStarted","Data":"3ffba879105a8fc866bb0739663862a8fdb342c2ae9dfc87fa49036a1948f8a7"} Mar 12 14:50:43.846548 master-0 kubenswrapper[37036]: I0312 14:50:43.846522 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:43.846606 master-0 kubenswrapper[37036]: I0312 14:50:43.846552 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:50:44.167996 master-0 kubenswrapper[37036]: I0312 14:50:44.167647 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-hzl7q" podStartSLOduration=12.172217021 podStartE2EDuration="19.167617848s" podCreationTimestamp="2026-03-12 14:50:25 +0000 UTC" firstStartedPulling="2026-03-12 14:50:34.697848747 +0000 UTC m=+893.705589684" lastFinishedPulling="2026-03-12 14:50:41.693249584 +0000 UTC m=+900.700990511" observedRunningTime="2026-03-12 14:50:44.160055937 +0000 UTC m=+903.167796874" watchObservedRunningTime="2026-03-12 14:50:44.167617848 +0000 UTC m=+903.175358795" Mar 12 14:50:44.649125 master-0 
kubenswrapper[37036]: I0312 14:50:44.610968 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:44.649125 master-0 kubenswrapper[37036]: I0312 14:50:44.611014 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:44.658801 master-0 kubenswrapper[37036]: I0312 14:50:44.658734 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:44.663702 master-0 kubenswrapper[37036]: I0312 14:50:44.663615 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:45.464654 master-0 kubenswrapper[37036]: I0312 14:50:45.464137 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:45.464654 master-0 kubenswrapper[37036]: I0312 14:50:45.464191 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:45.733047 master-0 kubenswrapper[37036]: I0312 14:50:45.732858 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-679985d476-m2lh7"] Mar 12 14:50:45.733821 master-0 kubenswrapper[37036]: E0312 14:50:45.733794 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41a1efb-8f79-468f-a3e1-2d42cba4456a" containerName="keystone-bootstrap" Mar 12 14:50:45.733821 master-0 kubenswrapper[37036]: I0312 14:50:45.733819 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a1efb-8f79-468f-a3e1-2d42cba4456a" containerName="keystone-bootstrap" Mar 12 14:50:45.734129 master-0 kubenswrapper[37036]: I0312 14:50:45.734105 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41a1efb-8f79-468f-a3e1-2d42cba4456a" 
containerName="keystone-bootstrap" Mar 12 14:50:45.734877 master-0 kubenswrapper[37036]: I0312 14:50:45.734856 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.737702 master-0 kubenswrapper[37036]: I0312 14:50:45.737656 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 14:50:45.738197 master-0 kubenswrapper[37036]: I0312 14:50:45.738171 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 14:50:45.738389 master-0 kubenswrapper[37036]: I0312 14:50:45.738371 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 14:50:45.738526 master-0 kubenswrapper[37036]: I0312 14:50:45.738505 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 14:50:45.741200 master-0 kubenswrapper[37036]: I0312 14:50:45.741161 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 14:50:45.748971 master-0 kubenswrapper[37036]: I0312 14:50:45.748873 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679985d476-m2lh7"] Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793042 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-internal-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793093 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczgc\" (UniqueName: 
\"kubernetes.io/projected/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-kube-api-access-kczgc\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793122 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-credential-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793183 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-scripts\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793205 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-config-data\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793234 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-fernet-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793290 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-combined-ca-bundle\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.793928 master-0 kubenswrapper[37036]: I0312 14:50:45.793375 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-public-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895332 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-internal-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895386 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczgc\" (UniqueName: \"kubernetes.io/projected/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-kube-api-access-kczgc\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895423 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-credential-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895504 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-scripts\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895529 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-config-data\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895547 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-fernet-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895592 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-combined-ca-bundle\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.895926 master-0 kubenswrapper[37036]: I0312 14:50:45.895646 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-public-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.903921 master-0 kubenswrapper[37036]: I0312 14:50:45.901235 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-public-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.903921 master-0 kubenswrapper[37036]: I0312 14:50:45.901486 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-scripts\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.907922 master-0 kubenswrapper[37036]: I0312 14:50:45.905840 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-combined-ca-bundle\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.907922 master-0 kubenswrapper[37036]: I0312 14:50:45.906557 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-fernet-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.907922 master-0 kubenswrapper[37036]: I0312 14:50:45.907335 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-internal-tls-certs\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.907922 master-0 kubenswrapper[37036]: I0312 14:50:45.907535 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-config-data\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.909645 master-0 kubenswrapper[37036]: I0312 14:50:45.908885 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-credential-keys\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:45.931926 master-0 kubenswrapper[37036]: I0312 14:50:45.928779 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczgc\" (UniqueName: \"kubernetes.io/projected/04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1-kube-api-access-kczgc\") pod \"keystone-679985d476-m2lh7\" (UID: \"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1\") " pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:46.069236 master-0 kubenswrapper[37036]: I0312 14:50:46.069152 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:46.636957 master-0 kubenswrapper[37036]: I0312 14:50:46.636881 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679985d476-m2lh7"] Mar 12 14:50:47.489170 master-0 kubenswrapper[37036]: I0312 14:50:47.489109 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679985d476-m2lh7" event={"ID":"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1","Type":"ContainerStarted","Data":"8fb82180c3a37ff204a70c581ccda10aa06b6be959940ae97d773d6f44d75437"} Mar 12 14:50:47.489170 master-0 kubenswrapper[37036]: I0312 14:50:47.489173 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679985d476-m2lh7" event={"ID":"04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1","Type":"ContainerStarted","Data":"8b1983a1a9123c5e52f93bb6d5b8a659a7c64291832cabc556db8383ad9c966f"} Mar 12 14:50:47.489449 master-0 kubenswrapper[37036]: I0312 14:50:47.489181 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:50:47.489449 master-0 kubenswrapper[37036]: I0312 14:50:47.489206 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-679985d476-m2lh7" Mar 12 14:50:47.489449 master-0 kubenswrapper[37036]: I0312 14:50:47.489208 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 14:50:47.542774 master-0 kubenswrapper[37036]: I0312 14:50:47.542671 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-679985d476-m2lh7" podStartSLOduration=2.5426375119999998 podStartE2EDuration="2.542637512s" podCreationTimestamp="2026-03-12 14:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:47.506655336 +0000 UTC m=+906.514396283" watchObservedRunningTime="2026-03-12 14:50:47.542637512 +0000 UTC m=+906.550378459" Mar 12 
14:50:47.760839 master-0 kubenswrapper[37036]: I0312 14:50:47.760712 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:47.796840 master-0 kubenswrapper[37036]: I0312 14:50:47.786955 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:50:48.499347 master-0 kubenswrapper[37036]: I0312 14:50:48.499302 37036 generic.go:334] "Generic (PLEG): container finished" podID="75876022-f077-4c9e-95c1-3d0b1dbb61a3" containerID="aece55dcd5ca547bc270fd9af22f19103250fe7a8fcb36ebe6f7e2caad82ca26" exitCode=0 Mar 12 14:50:48.499551 master-0 kubenswrapper[37036]: I0312 14:50:48.499398 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-db-sync-6bdmp" event={"ID":"75876022-f077-4c9e-95c1-3d0b1dbb61a3","Type":"ContainerDied","Data":"aece55dcd5ca547bc270fd9af22f19103250fe7a8fcb36ebe6f7e2caad82ca26"} Mar 12 14:50:49.873721 master-0 kubenswrapper[37036]: I0312 14:50:49.873663 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:49.997846 master-0 kubenswrapper[37036]: I0312 14:50:49.997784 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.998132 master-0 kubenswrapper[37036]: I0312 14:50:49.998107 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.998330 master-0 kubenswrapper[37036]: I0312 14:50:49.998179 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:50:49.998330 master-0 kubenswrapper[37036]: I0312 14:50:49.998248 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.998427 master-0 kubenswrapper[37036]: I0312 14:50:49.998342 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.998467 master-0 kubenswrapper[37036]: I0312 14:50:49.998442 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlggb\" (UniqueName: \"kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.998523 master-0 kubenswrapper[37036]: I0312 14:50:49.998500 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle\") pod \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\" (UID: \"75876022-f077-4c9e-95c1-3d0b1dbb61a3\") "
Mar 12 14:50:49.999508 master-0 kubenswrapper[37036]: I0312 14:50:49.999478 37036 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75876022-f077-4c9e-95c1-3d0b1dbb61a3-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.001152 master-0 kubenswrapper[37036]: I0312 14:50:50.001118 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts" (OuterVolumeSpecName: "scripts") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:50.001664 master-0 kubenswrapper[37036]: I0312 14:50:50.001604 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:50.002044 master-0 kubenswrapper[37036]: I0312 14:50:50.002002 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb" (OuterVolumeSpecName: "kube-api-access-nlggb") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "kube-api-access-nlggb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:50:50.025405 master-0 kubenswrapper[37036]: I0312 14:50:50.025357 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:50.041373 master-0 kubenswrapper[37036]: I0312 14:50:50.041273 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data" (OuterVolumeSpecName: "config-data") pod "75876022-f077-4c9e-95c1-3d0b1dbb61a3" (UID: "75876022-f077-4c9e-95c1-3d0b1dbb61a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:50:50.101512 master-0 kubenswrapper[37036]: I0312 14:50:50.101448 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.101512 master-0 kubenswrapper[37036]: I0312 14:50:50.101515 37036 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.101512 master-0 kubenswrapper[37036]: I0312 14:50:50.101534 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlggb\" (UniqueName: \"kubernetes.io/projected/75876022-f077-4c9e-95c1-3d0b1dbb61a3-kube-api-access-nlggb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.101789 master-0 kubenswrapper[37036]: I0312 14:50:50.101543 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.101789 master-0 kubenswrapper[37036]: I0312 14:50:50.101553 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75876022-f077-4c9e-95c1-3d0b1dbb61a3-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:50:50.526577 master-0 kubenswrapper[37036]: I0312 14:50:50.526520 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-db-sync-6bdmp" event={"ID":"75876022-f077-4c9e-95c1-3d0b1dbb61a3","Type":"ContainerDied","Data":"dc0a4e74bb4a1da0a812e65e26889bbef0cc39857936388772d4142bffe98524"}
Mar 12 14:50:50.526577 master-0 kubenswrapper[37036]: I0312 14:50:50.526573 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0a4e74bb4a1da0a812e65e26889bbef0cc39857936388772d4142bffe98524"
Mar 12 14:50:50.526832 master-0 kubenswrapper[37036]: I0312 14:50:50.526634 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-db-sync-6bdmp"
Mar 12 14:50:50.870623 master-0 kubenswrapper[37036]: I0312 14:50:50.870526 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:50:50.880640 master-0 kubenswrapper[37036]: E0312 14:50:50.880592 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75876022-f077-4c9e-95c1-3d0b1dbb61a3" containerName="cinder-05598-db-sync"
Mar 12 14:50:50.886218 master-0 kubenswrapper[37036]: I0312 14:50:50.886182 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="75876022-f077-4c9e-95c1-3d0b1dbb61a3" containerName="cinder-05598-db-sync"
Mar 12 14:50:50.886828 master-0 kubenswrapper[37036]: I0312 14:50:50.886811 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="75876022-f077-4c9e-95c1-3d0b1dbb61a3" containerName="cinder-05598-db-sync"
Mar 12 14:50:50.888457 master-0 kubenswrapper[37036]: I0312 14:50:50.888436 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:50:50.888628 master-0 kubenswrapper[37036]: I0312 14:50:50.888613 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:50.892607 master-0 kubenswrapper[37036]: I0312 14:50:50.892553 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-config-data"
Mar 12 14:50:50.892891 master-0 kubenswrapper[37036]: I0312 14:50:50.892865 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-scripts"
Mar 12 14:50:50.901670 master-0 kubenswrapper[37036]: I0312 14:50:50.898142 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-scheduler-config-data"
Mar 12 14:50:50.962231 master-0 kubenswrapper[37036]: I0312 14:50:50.959573 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"]
Mar 12 14:50:50.964158 master-0 kubenswrapper[37036]: I0312 14:50:50.963009 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.038924 master-0 kubenswrapper[37036]: I0312 14:50:51.015586 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-volume-lvm-iscsi-config-data"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047576 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047659 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047688 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047720 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047834 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047858 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.047889 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048084 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048168 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048321 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048360 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048410 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048439 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqfq\" (UniqueName: \"kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048473 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048586 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048625 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048700 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048733 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x294q\" (UniqueName: \"kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048806 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048855 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.062923 master-0 kubenswrapper[37036]: I0312 14:50:51.048915 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.064500 master-0 kubenswrapper[37036]: I0312 14:50:51.064452 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"]
Mar 12 14:50:51.125007 master-0 kubenswrapper[37036]: I0312 14:50:51.118132 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"]
Mar 12 14:50:51.125007 master-0 kubenswrapper[37036]: I0312 14:50:51.123651 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.148407 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"]
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163677 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163735 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163800 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163825 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x294q\" (UniqueName: \"kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163831 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163940 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.163951 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.164061 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.164105 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.164181 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.164213 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.164243 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169031 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169037 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169134 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169176 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169218 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169314 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169347 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169467 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169488 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169532 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169563 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqfq\" (UniqueName: \"kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.169591 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.178630 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.178765 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.180253 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.180348 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.180543 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.180578 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.181929 master-0 kubenswrapper[37036]: I0312 14:50:51.181756 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.205993 master-0 kubenswrapper[37036]: I0312 14:50:51.189968 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.205993 master-0 kubenswrapper[37036]: I0312 14:50:51.191181 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.205993 master-0 kubenswrapper[37036]: I0312 14:50:51.194027 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.208413 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x294q\" (UniqueName: \"kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.212028 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.212638 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.216947 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.220075 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-backup-0"]
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.224569 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-backup-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.225581 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.226934 master-0 kubenswrapper[37036]: I0312 14:50:51.226048 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:50:51.238923 master-0 kubenswrapper[37036]: I0312 14:50:51.228729 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-backup-config-data"
Mar 12 14:50:51.259953 master-0 kubenswrapper[37036]: I0312 14:50:51.257238 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.270326 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqfq\" (UniqueName: \"kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq\") pod \"cinder-05598-scheduler-0\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271490 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271519 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9jl\" (UniqueName: \"kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271598 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271644 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271662 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.276986 master-0 kubenswrapper[37036]: I0312 14:50:51.271709 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.312927 master-0 kubenswrapper[37036]: I0312 14:50:51.312550 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-backup-0"]
Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378516 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"
Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378579 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0"
Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378615 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9jl\" (UniqueName:
\"kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378659 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378759 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378859 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378925 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.378976 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379005 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379051 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2lt\" (UniqueName: \"kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379078 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379118 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379163 37036 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379188 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379281 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379318 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379355 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379402 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379424 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379468 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.379495 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.384998 master-0 kubenswrapper[37036]: I0312 14:50:51.380127 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:50:51.399082 master-0 kubenswrapper[37036]: I0312 14:50:51.395735 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.399082 master-0 kubenswrapper[37036]: I0312 14:50:51.396098 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.399082 master-0 kubenswrapper[37036]: I0312 14:50:51.396471 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.399082 master-0 kubenswrapper[37036]: I0312 14:50:51.397209 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.399082 master-0 kubenswrapper[37036]: I0312 14:50:51.397866 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " 
pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.421253 master-0 kubenswrapper[37036]: I0312 14:50:51.421193 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9jl\" (UniqueName: \"kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl\") pod \"dnsmasq-dns-5ddf45ffb9-dpmhn\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.497838 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.497943 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.497972 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498015 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 
kubenswrapper[37036]: I0312 14:50:51.498042 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498107 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498176 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498228 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498294 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498335 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498379 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498429 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs2lt\" (UniqueName: \"kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498457 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498489 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498589 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.498703 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.499131 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.499408 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.502461 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.503106 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick\") pod 
\"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.503448 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.503871 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.503948 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.504282 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.504525 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 
kubenswrapper[37036]: I0312 14:50:51.504538 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.504657 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.506957 master-0 kubenswrapper[37036]: I0312 14:50:51.505524 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.511916 master-0 kubenswrapper[37036]: I0312 14:50:51.511744 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.512167 master-0 kubenswrapper[37036]: I0312 14:50:51.512144 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:51.514094 master-0 kubenswrapper[37036]: I0312 14:50:51.514072 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.518941 master-0 kubenswrapper[37036]: I0312 14:50:51.517852 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-api-config-data" Mar 12 14:50:51.556101 master-0 kubenswrapper[37036]: I0312 14:50:51.545111 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:51.560862 master-0 kubenswrapper[37036]: I0312 14:50:51.560558 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0" Mar 12 14:50:51.626103 master-0 kubenswrapper[37036]: I0312 14:50:51.624846 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs2lt\" (UniqueName: \"kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt\") pod \"cinder-05598-backup-0\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.715286 master-0 kubenswrapper[37036]: I0312 14:50:51.715233 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715309 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715367 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715461 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715538 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715588 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwl4\" (UniqueName: \"kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.720086 master-0 kubenswrapper[37036]: I0312 14:50:51.715629 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.721596 master-0 kubenswrapper[37036]: I0312 14:50:51.721530 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:51.817576 master-0 kubenswrapper[37036]: I0312 14:50:51.817510 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.817754 master-0 kubenswrapper[37036]: I0312 14:50:51.817613 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.817815 master-0 kubenswrapper[37036]: I0312 14:50:51.817758 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.817856 master-0 kubenswrapper[37036]: I0312 14:50:51.817823 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwl4\" (UniqueName: \"kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.817916 master-0 kubenswrapper[37036]: I0312 14:50:51.817881 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.818026 
master-0 kubenswrapper[37036]: I0312 14:50:51.817957 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.818078 master-0 kubenswrapper[37036]: I0312 14:50:51.818053 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.819785 master-0 kubenswrapper[37036]: I0312 14:50:51.819753 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.828892 master-0 kubenswrapper[37036]: I0312 14:50:51.825542 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.828892 master-0 kubenswrapper[37036]: I0312 14:50:51.825943 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.828892 master-0 kubenswrapper[37036]: I0312 14:50:51.826807 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.878623 master-0 kubenswrapper[37036]: I0312 14:50:51.878577 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.885494 master-0 kubenswrapper[37036]: I0312 14:50:51.885291 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwl4\" (UniqueName: \"kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.887839 master-0 kubenswrapper[37036]: I0312 14:50:51.887790 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:51.908141 master-0 kubenswrapper[37036]: I0312 14:50:51.904611 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-backup-0" Mar 12 14:50:51.932036 master-0 kubenswrapper[37036]: I0312 14:50:51.931916 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:52.048145 master-0 kubenswrapper[37036]: I0312 14:50:52.039298 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:50:52.185154 master-0 kubenswrapper[37036]: W0312 14:50:52.185104 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7b2ee2_8dd2_4588_aa3f_cf22774d3ff7.slice/crio-31ae4d9822a7931bc1aadbda22285a8b3e0599ef3fbbaa0541eac585a8dbb91f WatchSource:0}: Error finding container 31ae4d9822a7931bc1aadbda22285a8b3e0599ef3fbbaa0541eac585a8dbb91f: Status 404 returned error can't find the container with id 31ae4d9822a7931bc1aadbda22285a8b3e0599ef3fbbaa0541eac585a8dbb91f Mar 12 14:50:52.199574 master-0 kubenswrapper[37036]: I0312 14:50:52.196987 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-scheduler-0"] Mar 12 14:50:52.336037 master-0 kubenswrapper[37036]: I0312 14:50:52.335835 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"] Mar 12 14:50:52.354246 master-0 kubenswrapper[37036]: W0312 14:50:52.354190 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba06b21_ba2e_4104_aa8e_f23391c08939.slice/crio-5aa42dd484db8bbfad17ec549c6106e1f0ac759d404d7aedaf01828ffd9b4140 WatchSource:0}: Error finding container 5aa42dd484db8bbfad17ec549c6106e1f0ac759d404d7aedaf01828ffd9b4140: Status 404 returned error can't find the container with id 5aa42dd484db8bbfad17ec549c6106e1f0ac759d404d7aedaf01828ffd9b4140 Mar 12 14:50:52.515075 master-0 kubenswrapper[37036]: I0312 14:50:52.515003 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:52.523747 master-0 kubenswrapper[37036]: W0312 14:50:52.523041 37036 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod865aa6c9_1fad_481d_a029_977159f15829.slice/crio-0d5bc36f2a66d7e19464f800408a9c5be12fb5927d949f606bfc82864de199e0 WatchSource:0}: Error finding container 0d5bc36f2a66d7e19464f800408a9c5be12fb5927d949f606bfc82864de199e0: Status 404 returned error can't find the container with id 0d5bc36f2a66d7e19464f800408a9c5be12fb5927d949f606bfc82864de199e0 Mar 12 14:50:52.576581 master-0 kubenswrapper[37036]: I0312 14:50:52.576521 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerStarted","Data":"2cef4745b40ae14ecc080e42b727f7bdbcb62567da720ac71e610e705673b596"} Mar 12 14:50:52.584146 master-0 kubenswrapper[37036]: I0312 14:50:52.584045 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" event={"ID":"5ba06b21-ba2e-4104-aa8e-f23391c08939","Type":"ContainerStarted","Data":"5aa42dd484db8bbfad17ec549c6106e1f0ac759d404d7aedaf01828ffd9b4140"} Mar 12 14:50:52.587091 master-0 kubenswrapper[37036]: I0312 14:50:52.587042 37036 generic.go:334] "Generic (PLEG): container finished" podID="fa5c40b0-d90b-4a98-af67-d37503c2c2dc" containerID="5458ce02577ae859b4306e66233e53df494f053609bc16547a7276b6f02a2b9d" exitCode=0 Mar 12 14:50:52.587209 master-0 kubenswrapper[37036]: I0312 14:50:52.587121 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-79jsx" event={"ID":"fa5c40b0-d90b-4a98-af67-d37503c2c2dc","Type":"ContainerDied","Data":"5458ce02577ae859b4306e66233e53df494f053609bc16547a7276b6f02a2b9d"} Mar 12 14:50:52.589764 master-0 kubenswrapper[37036]: I0312 14:50:52.589710 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" 
event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerStarted","Data":"31ae4d9822a7931bc1aadbda22285a8b3e0599ef3fbbaa0541eac585a8dbb91f"} Mar 12 14:50:52.594473 master-0 kubenswrapper[37036]: I0312 14:50:52.594429 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerStarted","Data":"0d5bc36f2a66d7e19464f800408a9c5be12fb5927d949f606bfc82864de199e0"} Mar 12 14:50:52.723232 master-0 kubenswrapper[37036]: I0312 14:50:52.723070 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:50:52.774144 master-0 kubenswrapper[37036]: W0312 14:50:52.774052 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3fda052_1aaa_41a2_80a1_0917c2494c02.slice/crio-2c9a12334196292e01eb5f9d62751f426dea592252ad7974c923d14988307dd5 WatchSource:0}: Error finding container 2c9a12334196292e01eb5f9d62751f426dea592252ad7974c923d14988307dd5: Status 404 returned error can't find the container with id 2c9a12334196292e01eb5f9d62751f426dea592252ad7974c923d14988307dd5 Mar 12 14:50:53.618334 master-0 kubenswrapper[37036]: I0312 14:50:53.612989 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerStarted","Data":"c4afb87adb993421552628e7ef36787da876356b0ebf62e7eae5b6b343edc288"} Mar 12 14:50:53.624249 master-0 kubenswrapper[37036]: I0312 14:50:53.624194 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerStarted","Data":"2c9a12334196292e01eb5f9d62751f426dea592252ad7974c923d14988307dd5"} Mar 12 14:50:53.627800 master-0 kubenswrapper[37036]: I0312 14:50:53.627729 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerStarted","Data":"cc012d715203f799e02dbf073e8bbf4b7fa3a3de5b2b96876d2846fc6aea2746"} Mar 12 14:50:53.642067 master-0 kubenswrapper[37036]: I0312 14:50:53.634811 37036 generic.go:334] "Generic (PLEG): container finished" podID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerID="a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6" exitCode=0 Mar 12 14:50:53.642067 master-0 kubenswrapper[37036]: I0312 14:50:53.634891 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" event={"ID":"5ba06b21-ba2e-4104-aa8e-f23391c08939","Type":"ContainerDied","Data":"a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6"} Mar 12 14:50:53.896094 master-0 kubenswrapper[37036]: I0312 14:50:53.896035 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:54.378613 master-0 kubenswrapper[37036]: I0312 14:50:54.378553 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:54.453344 master-0 kubenswrapper[37036]: I0312 14:50:54.453274 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config\") pod \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " Mar 12 14:50:54.454176 master-0 kubenswrapper[37036]: I0312 14:50:54.454065 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle\") pod \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " Mar 12 14:50:54.454653 master-0 kubenswrapper[37036]: I0312 14:50:54.454581 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sjxl\" (UniqueName: \"kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl\") pod \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\" (UID: \"fa5c40b0-d90b-4a98-af67-d37503c2c2dc\") " Mar 12 14:50:54.461169 master-0 kubenswrapper[37036]: I0312 14:50:54.461117 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl" (OuterVolumeSpecName: "kube-api-access-8sjxl") pod "fa5c40b0-d90b-4a98-af67-d37503c2c2dc" (UID: "fa5c40b0-d90b-4a98-af67-d37503c2c2dc"). InnerVolumeSpecName "kube-api-access-8sjxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:54.513082 master-0 kubenswrapper[37036]: I0312 14:50:54.513015 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config" (OuterVolumeSpecName: "config") pod "fa5c40b0-d90b-4a98-af67-d37503c2c2dc" (UID: "fa5c40b0-d90b-4a98-af67-d37503c2c2dc"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:54.556809 master-0 kubenswrapper[37036]: I0312 14:50:54.556748 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa5c40b0-d90b-4a98-af67-d37503c2c2dc" (UID: "fa5c40b0-d90b-4a98-af67-d37503c2c2dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:54.560466 master-0 kubenswrapper[37036]: I0312 14:50:54.560421 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sjxl\" (UniqueName: \"kubernetes.io/projected/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-kube-api-access-8sjxl\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:54.560769 master-0 kubenswrapper[37036]: I0312 14:50:54.560756 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:54.560840 master-0 kubenswrapper[37036]: I0312 14:50:54.560829 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa5c40b0-d90b-4a98-af67-d37503c2c2dc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:54.687017 master-0 kubenswrapper[37036]: I0312 14:50:54.686846 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerStarted","Data":"0ae3e32dde61cfc9bffc5b18348f8686afde317becb2f0a984cb35b540adce74"} Mar 12 14:50:54.697318 master-0 kubenswrapper[37036]: I0312 14:50:54.696272 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" 
event={"ID":"5ba06b21-ba2e-4104-aa8e-f23391c08939","Type":"ContainerStarted","Data":"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9"} Mar 12 14:50:54.697318 master-0 kubenswrapper[37036]: I0312 14:50:54.697275 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:54.703411 master-0 kubenswrapper[37036]: I0312 14:50:54.703292 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-79jsx" event={"ID":"fa5c40b0-d90b-4a98-af67-d37503c2c2dc","Type":"ContainerDied","Data":"c9d8ac4498f064b9465dc24667bcc60d00b665c43733f0c97f85baba365ecc58"} Mar 12 14:50:54.703411 master-0 kubenswrapper[37036]: I0312 14:50:54.703340 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d8ac4498f064b9465dc24667bcc60d00b665c43733f0c97f85baba365ecc58" Mar 12 14:50:54.703880 master-0 kubenswrapper[37036]: I0312 14:50:54.703815 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-79jsx" Mar 12 14:50:54.715184 master-0 kubenswrapper[37036]: I0312 14:50:54.715092 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-volume-lvm-iscsi-0" podStartSLOduration=3.694882951 podStartE2EDuration="4.715076876s" podCreationTimestamp="2026-03-12 14:50:50 +0000 UTC" firstStartedPulling="2026-03-12 14:50:52.065069289 +0000 UTC m=+911.072810226" lastFinishedPulling="2026-03-12 14:50:53.085263224 +0000 UTC m=+912.093004151" observedRunningTime="2026-03-12 14:50:54.707953571 +0000 UTC m=+913.715694508" watchObservedRunningTime="2026-03-12 14:50:54.715076876 +0000 UTC m=+913.722817813" Mar 12 14:50:54.725023 master-0 kubenswrapper[37036]: I0312 14:50:54.724449 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerStarted","Data":"d821634c4b364f62a644cc88a3dc597d65362ac07795102176632645f9dca271"} Mar 12 14:50:54.745601 master-0 kubenswrapper[37036]: I0312 14:50:54.745166 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerStarted","Data":"0cd35e8931e93386ca83e32e657bf0e3490a89c45add5028b769ed512f1a803b"} Mar 12 14:50:54.745601 master-0 kubenswrapper[37036]: I0312 14:50:54.745336 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-api-0" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-05598-api-log" containerID="cri-o://c4afb87adb993421552628e7ef36787da876356b0ebf62e7eae5b6b343edc288" gracePeriod=30 Mar 12 14:50:54.745601 master-0 kubenswrapper[37036]: I0312 14:50:54.745427 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-api-0" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-api" 
containerID="cri-o://0cd35e8931e93386ca83e32e657bf0e3490a89c45add5028b769ed512f1a803b" gracePeriod=30 Mar 12 14:50:54.745601 master-0 kubenswrapper[37036]: I0312 14:50:54.745549 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-05598-api-0" Mar 12 14:50:54.764790 master-0 kubenswrapper[37036]: I0312 14:50:54.763599 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerStarted","Data":"508ccd5b5bd99366be885a72f52be840ab7faa8850dc8956610dd979f03cc6d6"} Mar 12 14:50:54.764790 master-0 kubenswrapper[37036]: I0312 14:50:54.763650 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerStarted","Data":"b4cc022785839865aedd2c73e9f5aa6cfda959189b88b8bb581e5cd617b459f0"} Mar 12 14:50:54.780849 master-0 kubenswrapper[37036]: I0312 14:50:54.776385 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" podStartSLOduration=4.776364354 podStartE2EDuration="4.776364354s" podCreationTimestamp="2026-03-12 14:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:54.744561589 +0000 UTC m=+913.752302526" watchObservedRunningTime="2026-03-12 14:50:54.776364354 +0000 UTC m=+913.784105291" Mar 12 14:50:54.966416 master-0 kubenswrapper[37036]: I0312 14:50:54.955032 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-api-0" podStartSLOduration=3.955008098 podStartE2EDuration="3.955008098s" podCreationTimestamp="2026-03-12 14:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:54.798816625 +0000 UTC m=+913.806557562" 
watchObservedRunningTime="2026-03-12 14:50:54.955008098 +0000 UTC m=+913.962749025" Mar 12 14:50:55.068943 master-0 kubenswrapper[37036]: I0312 14:50:55.067375 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-backup-0" podStartSLOduration=3.2328278089999998 podStartE2EDuration="4.067348267s" podCreationTimestamp="2026-03-12 14:50:51 +0000 UTC" firstStartedPulling="2026-03-12 14:50:52.776600834 +0000 UTC m=+911.784341781" lastFinishedPulling="2026-03-12 14:50:53.611121302 +0000 UTC m=+912.618862239" observedRunningTime="2026-03-12 14:50:54.842538748 +0000 UTC m=+913.850279685" watchObservedRunningTime="2026-03-12 14:50:55.067348267 +0000 UTC m=+914.075089204" Mar 12 14:50:55.212862 master-0 kubenswrapper[37036]: I0312 14:50:55.211919 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"] Mar 12 14:50:55.212862 master-0 kubenswrapper[37036]: E0312 14:50:55.212615 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5c40b0-d90b-4a98-af67-d37503c2c2dc" containerName="neutron-db-sync" Mar 12 14:50:55.212862 master-0 kubenswrapper[37036]: I0312 14:50:55.212634 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5c40b0-d90b-4a98-af67-d37503c2c2dc" containerName="neutron-db-sync" Mar 12 14:50:55.213055 master-0 kubenswrapper[37036]: I0312 14:50:55.212976 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5c40b0-d90b-4a98-af67-d37503c2c2dc" containerName="neutron-db-sync" Mar 12 14:50:55.214710 master-0 kubenswrapper[37036]: I0312 14:50:55.214498 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.233104 master-0 kubenswrapper[37036]: I0312 14:50:55.224852 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 14:50:55.233104 master-0 kubenswrapper[37036]: I0312 14:50:55.227471 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 14:50:55.240003 master-0 kubenswrapper[37036]: I0312 14:50:55.239071 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 14:50:55.325934 master-0 kubenswrapper[37036]: I0312 14:50:55.325470 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"] Mar 12 14:50:55.335920 master-0 kubenswrapper[37036]: I0312 14:50:55.333289 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"] Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.351961 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.352021 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.352129 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttbh\" (UniqueName: 
\"kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.352307 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.352496 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.356919 master-0 kubenswrapper[37036]: I0312 14:50:55.354941 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 14:50:55.359754 master-0 kubenswrapper[37036]: I0312 14:50:55.357559 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.394441 master-0 kubenswrapper[37036]: I0312 14:50:55.394402 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.461857 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.461932 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.461960 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94l5g\" (UniqueName: \"kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462017 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttbh\" (UniqueName: \"kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462106 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462213 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462280 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462347 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462446 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 
14:50:55.462472 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.463134 master-0 kubenswrapper[37036]: I0312 14:50:55.462509 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.490645 master-0 kubenswrapper[37036]: I0312 14:50:55.479495 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.490645 master-0 kubenswrapper[37036]: I0312 14:50:55.479495 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.490645 master-0 kubenswrapper[37036]: I0312 14:50:55.485757 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.493746 master-0 kubenswrapper[37036]: I0312 14:50:55.491643 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.504392 master-0 kubenswrapper[37036]: I0312 14:50:55.495746 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttbh\" (UniqueName: \"kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh\") pod \"neutron-865fc75fb8-6hmpx\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565302 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565375 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565502 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94l5g\" (UniqueName: \"kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565557 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565670 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.565765 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.567907 master-0 kubenswrapper[37036]: I0312 14:50:55.566998 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.572209 master-0 kubenswrapper[37036]: I0312 14:50:55.571092 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.572209 master-0 kubenswrapper[37036]: I0312 14:50:55.571495 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.577914 master-0 kubenswrapper[37036]: I0312 14:50:55.572509 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.577914 master-0 kubenswrapper[37036]: I0312 14:50:55.574666 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.598985 master-0 kubenswrapper[37036]: I0312 14:50:55.595924 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94l5g\" (UniqueName: \"kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g\") pod \"dnsmasq-dns-95846d9c5-hjsmg\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.604026 master-0 kubenswrapper[37036]: I0312 14:50:55.603815 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:55.722958 master-0 kubenswrapper[37036]: I0312 14:50:55.721248 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:55.781358 master-0 kubenswrapper[37036]: I0312 14:50:55.780393 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerStarted","Data":"696b0917d16a65a000c7418fcc3f3586e0ddfb0f68db7e45eb59396b6b67b5b6"} Mar 12 14:50:55.789117 master-0 kubenswrapper[37036]: I0312 14:50:55.787720 37036 generic.go:334] "Generic (PLEG): container finished" podID="865aa6c9-1fad-481d-a029-977159f15829" containerID="c4afb87adb993421552628e7ef36787da876356b0ebf62e7eae5b6b343edc288" exitCode=143 Mar 12 14:50:55.789117 master-0 kubenswrapper[37036]: I0312 14:50:55.788059 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerDied","Data":"c4afb87adb993421552628e7ef36787da876356b0ebf62e7eae5b6b343edc288"} Mar 12 14:50:55.834413 master-0 kubenswrapper[37036]: I0312 14:50:55.834347 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-scheduler-0" podStartSLOduration=4.724448709 podStartE2EDuration="5.833885719s" podCreationTimestamp="2026-03-12 14:50:50 +0000 UTC" firstStartedPulling="2026-03-12 14:50:52.190302103 +0000 UTC m=+911.198043040" lastFinishedPulling="2026-03-12 14:50:53.299739113 +0000 UTC m=+912.307480050" observedRunningTime="2026-03-12 14:50:55.818541702 +0000 UTC m=+914.826282639" watchObservedRunningTime="2026-03-12 14:50:55.833885719 +0000 UTC m=+914.841626656" Mar 12 14:50:56.392323 master-0 kubenswrapper[37036]: I0312 14:50:56.392256 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:50:56.422041 master-0 kubenswrapper[37036]: I0312 14:50:56.421948 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 
14:50:56.472022 master-0 kubenswrapper[37036]: I0312 14:50:56.459843 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"] Mar 12 14:50:56.564011 master-0 kubenswrapper[37036]: I0312 14:50:56.563960 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-scheduler-0" Mar 12 14:50:56.826926 master-0 kubenswrapper[37036]: I0312 14:50:56.824154 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" event={"ID":"00cfce92-4961-4a84-a59e-b1b979f29a35","Type":"ContainerStarted","Data":"72548b0e888eb9b502240039b5beb76972e493ea0aa39524a2e19b00e6f160af"} Mar 12 14:50:56.844923 master-0 kubenswrapper[37036]: I0312 14:50:56.839593 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerStarted","Data":"4bf401abfc578e15d896145696115cc76e0aede234da331dda161d47ea4abc22"} Mar 12 14:50:56.864920 master-0 kubenswrapper[37036]: I0312 14:50:56.864212 37036 generic.go:334] "Generic (PLEG): container finished" podID="865aa6c9-1fad-481d-a029-977159f15829" containerID="0cd35e8931e93386ca83e32e657bf0e3490a89c45add5028b769ed512f1a803b" exitCode=0 Mar 12 14:50:56.868161 master-0 kubenswrapper[37036]: I0312 14:50:56.865585 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerDied","Data":"0cd35e8931e93386ca83e32e657bf0e3490a89c45add5028b769ed512f1a803b"} Mar 12 14:50:56.868161 master-0 kubenswrapper[37036]: I0312 14:50:56.866619 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="dnsmasq-dns" containerID="cri-o://85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9" gracePeriod=10 Mar 12 14:50:56.908928 
master-0 kubenswrapper[37036]: I0312 14:50:56.907639 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-backup-0" Mar 12 14:50:57.019582 master-0 kubenswrapper[37036]: I0312 14:50:57.019540 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.135765 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.135828 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.135853 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.135977 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwl4\" (UniqueName: \"kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.136090 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136120 master-0 kubenswrapper[37036]: I0312 14:50:57.136118 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136557 master-0 kubenswrapper[37036]: I0312 14:50:57.136145 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts\") pod \"865aa6c9-1fad-481d-a029-977159f15829\" (UID: \"865aa6c9-1fad-481d-a029-977159f15829\") " Mar 12 14:50:57.136649 master-0 kubenswrapper[37036]: I0312 14:50:57.136573 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:50:57.137151 master-0 kubenswrapper[37036]: I0312 14:50:57.137123 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs" (OuterVolumeSpecName: "logs") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:50:57.145094 master-0 kubenswrapper[37036]: I0312 14:50:57.144161 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4" (OuterVolumeSpecName: "kube-api-access-ljwl4") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "kube-api-access-ljwl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:57.163265 master-0 kubenswrapper[37036]: I0312 14:50:57.163211 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts" (OuterVolumeSpecName: "scripts") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:57.201973 master-0 kubenswrapper[37036]: I0312 14:50:57.199310 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:57.243922 master-0 kubenswrapper[37036]: I0312 14:50:57.240122 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwl4\" (UniqueName: \"kubernetes.io/projected/865aa6c9-1fad-481d-a029-977159f15829-kube-api-access-ljwl4\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.243922 master-0 kubenswrapper[37036]: I0312 14:50:57.240167 37036 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/865aa6c9-1fad-481d-a029-977159f15829-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.243922 master-0 kubenswrapper[37036]: I0312 14:50:57.240177 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.243922 master-0 kubenswrapper[37036]: I0312 14:50:57.240189 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.243922 master-0 kubenswrapper[37036]: I0312 14:50:57.240201 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/865aa6c9-1fad-481d-a029-977159f15829-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.288179 master-0 kubenswrapper[37036]: I0312 14:50:57.288089 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:57.343690 master-0 kubenswrapper[37036]: I0312 14:50:57.343633 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.350370 master-0 kubenswrapper[37036]: I0312 14:50:57.350317 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data" (OuterVolumeSpecName: "config-data") pod "865aa6c9-1fad-481d-a029-977159f15829" (UID: "865aa6c9-1fad-481d-a029-977159f15829"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:50:57.461729 master-0 kubenswrapper[37036]: I0312 14:50:57.461655 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/865aa6c9-1fad-481d-a029-977159f15829-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:57.761918 master-0 kubenswrapper[37036]: I0312 14:50:57.761266 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889136 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889248 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf9jl\" (UniqueName: \"kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889319 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889389 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889466 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.893960 master-0 kubenswrapper[37036]: I0312 14:50:57.889536 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb\") pod \"5ba06b21-ba2e-4104-aa8e-f23391c08939\" (UID: \"5ba06b21-ba2e-4104-aa8e-f23391c08939\") " Mar 12 14:50:57.902935 master-0 kubenswrapper[37036]: I0312 14:50:57.900752 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"865aa6c9-1fad-481d-a029-977159f15829","Type":"ContainerDied","Data":"0d5bc36f2a66d7e19464f800408a9c5be12fb5927d949f606bfc82864de199e0"} Mar 12 14:50:57.902935 master-0 kubenswrapper[37036]: I0312 14:50:57.900812 37036 scope.go:117] "RemoveContainer" containerID="0cd35e8931e93386ca83e32e657bf0e3490a89c45add5028b769ed512f1a803b" Mar 12 14:50:57.902935 master-0 kubenswrapper[37036]: I0312 14:50:57.900956 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:57.964992 master-0 kubenswrapper[37036]: I0312 14:50:57.964933 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl" (OuterVolumeSpecName: "kube-api-access-tf9jl") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "kube-api-access-tf9jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:57.992921 master-0 kubenswrapper[37036]: I0312 14:50:57.973375 37036 generic.go:334] "Generic (PLEG): container finished" podID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerID="ca8b737ce527a754ad4e301d625c526e66031c4a6b5979a7660b0fca39d46665" exitCode=0 Mar 12 14:50:57.992921 master-0 kubenswrapper[37036]: I0312 14:50:57.973478 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" event={"ID":"00cfce92-4961-4a84-a59e-b1b979f29a35","Type":"ContainerDied","Data":"ca8b737ce527a754ad4e301d625c526e66031c4a6b5979a7660b0fca39d46665"} Mar 12 14:50:57.994944 master-0 kubenswrapper[37036]: I0312 14:50:57.994888 37036 generic.go:334] "Generic (PLEG): container finished" podID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerID="85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9" exitCode=0 Mar 12 14:50:57.995118 master-0 kubenswrapper[37036]: I0312 14:50:57.995082 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" Mar 12 14:50:57.995210 master-0 kubenswrapper[37036]: I0312 14:50:57.995187 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" event={"ID":"5ba06b21-ba2e-4104-aa8e-f23391c08939","Type":"ContainerDied","Data":"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9"} Mar 12 14:50:57.995289 master-0 kubenswrapper[37036]: I0312 14:50:57.995275 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddf45ffb9-dpmhn" event={"ID":"5ba06b21-ba2e-4104-aa8e-f23391c08939","Type":"ContainerDied","Data":"5aa42dd484db8bbfad17ec549c6106e1f0ac759d404d7aedaf01828ffd9b4140"} Mar 12 14:50:58.002083 master-0 kubenswrapper[37036]: I0312 14:50:58.000799 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerStarted","Data":"fd5bbad93f3b715cb2cff75c5354ceb537717061278c7c8765fe906f2526900e"} Mar 12 14:50:58.002083 master-0 kubenswrapper[37036]: I0312 14:50:58.000847 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:50:58.002083 master-0 kubenswrapper[37036]: I0312 14:50:58.000859 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerStarted","Data":"4bbe3b4a0e9688f41597323db7c8d29bbb53026d0fbd65feee38b96c8e042453"} Mar 12 14:50:58.005260 master-0 kubenswrapper[37036]: I0312 14:50:58.005215 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf9jl\" (UniqueName: \"kubernetes.io/projected/5ba06b21-ba2e-4104-aa8e-f23391c08939-kube-api-access-tf9jl\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.034276 master-0 kubenswrapper[37036]: I0312 14:50:58.034235 37036 scope.go:117] "RemoveContainer" 
containerID="c4afb87adb993421552628e7ef36787da876356b0ebf62e7eae5b6b343edc288" Mar 12 14:50:58.056000 master-0 kubenswrapper[37036]: I0312 14:50:58.053777 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:58.065769 master-0 kubenswrapper[37036]: I0312 14:50:58.065691 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:58.084954 master-0 kubenswrapper[37036]: I0312 14:50:58.083766 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config" (OuterVolumeSpecName: "config") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:58.085548 master-0 kubenswrapper[37036]: I0312 14:50:58.085475 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:58.085548 master-0 kubenswrapper[37036]: I0312 14:50:58.085492 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ba06b21-ba2e-4104-aa8e-f23391c08939" (UID: "5ba06b21-ba2e-4104-aa8e-f23391c08939"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114087 37036 scope.go:117] "RemoveContainer" containerID="85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114767 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114804 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114813 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114826 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.116493 master-0 kubenswrapper[37036]: I0312 14:50:58.114835 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5ba06b21-ba2e-4104-aa8e-f23391c08939-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:50:58.117851 master-0 kubenswrapper[37036]: I0312 14:50:58.117804 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:58.147094 master-0 kubenswrapper[37036]: I0312 14:50:58.145290 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:58.163707 master-0 kubenswrapper[37036]: I0312 14:50:58.163438 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-865fc75fb8-6hmpx" podStartSLOduration=4.163415499 podStartE2EDuration="4.163415499s" podCreationTimestamp="2026-03-12 14:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:58.114851552 +0000 UTC m=+917.122592489" watchObservedRunningTime="2026-03-12 14:50:58.163415499 +0000 UTC m=+917.171156426" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.213965 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: E0312 14:50:58.214719 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="dnsmasq-dns" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.214734 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="dnsmasq-dns" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: E0312 14:50:58.214756 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="init" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.214763 37036 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="init" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: E0312 14:50:58.214781 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-api" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.214787 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-api" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: E0312 14:50:58.214796 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-05598-api-log" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.214802 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-05598-api-log" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.215031 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-05598-api-log" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.215062 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" containerName="dnsmasq-dns" Mar 12 14:50:58.216002 master-0 kubenswrapper[37036]: I0312 14:50:58.215070 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="865aa6c9-1fad-481d-a029-977159f15829" containerName="cinder-api" Mar 12 14:50:58.216587 master-0 kubenswrapper[37036]: I0312 14:50:58.216182 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225546 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225649 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225698 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-public-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225736 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225740 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-scripts\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225784 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmwv\" (UniqueName: \"kubernetes.io/projected/3f760056-b25d-4261-9ad5-66ba2dc8e046-kube-api-access-btmwv\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " 
pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225849 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f760056-b25d-4261-9ad5-66ba2dc8e046-logs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225875 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-api-config-data" Mar 12 14:50:58.226207 master-0 kubenswrapper[37036]: I0312 14:50:58.225876 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-internal-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.245921 master-0 kubenswrapper[37036]: I0312 14:50:58.243242 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.245921 master-0 kubenswrapper[37036]: I0312 14:50:58.243362 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.245921 master-0 kubenswrapper[37036]: I0312 14:50:58.243391 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f760056-b25d-4261-9ad5-66ba2dc8e046-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.284996 master-0 kubenswrapper[37036]: I0312 14:50:58.274258 37036 scope.go:117] "RemoveContainer" containerID="a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6" Mar 12 14:50:58.297989 master-0 kubenswrapper[37036]: I0312 14:50:58.292212 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:58.362324 master-0 kubenswrapper[37036]: I0312 14:50:58.362284 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f760056-b25d-4261-9ad5-66ba2dc8e046-logs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362530 master-0 kubenswrapper[37036]: I0312 14:50:58.362346 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-internal-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362530 master-0 kubenswrapper[37036]: I0312 14:50:58.362374 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362530 master-0 kubenswrapper[37036]: I0312 14:50:58.362399 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data\") pod 
\"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362530 master-0 kubenswrapper[37036]: I0312 14:50:58.362418 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f760056-b25d-4261-9ad5-66ba2dc8e046-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362684 master-0 kubenswrapper[37036]: I0312 14:50:58.362666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362723 master-0 kubenswrapper[37036]: I0312 14:50:58.362708 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-public-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362925 master-0 kubenswrapper[37036]: I0312 14:50:58.362885 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-scripts\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.362987 master-0 kubenswrapper[37036]: I0312 14:50:58.362965 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmwv\" (UniqueName: \"kubernetes.io/projected/3f760056-b25d-4261-9ad5-66ba2dc8e046-kube-api-access-btmwv\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " 
pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.365559 master-0 kubenswrapper[37036]: I0312 14:50:58.365410 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f760056-b25d-4261-9ad5-66ba2dc8e046-etc-machine-id\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.369383 master-0 kubenswrapper[37036]: I0312 14:50:58.368985 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f760056-b25d-4261-9ad5-66ba2dc8e046-logs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.370623 master-0 kubenswrapper[37036]: I0312 14:50:58.370582 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data-custom\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.374788 master-0 kubenswrapper[37036]: I0312 14:50:58.374744 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-scripts\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.375525 master-0 kubenswrapper[37036]: I0312 14:50:58.375479 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-combined-ca-bundle\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.377505 master-0 kubenswrapper[37036]: I0312 14:50:58.377478 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-public-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.380821 master-0 kubenswrapper[37036]: I0312 14:50:58.380798 37036 scope.go:117] "RemoveContainer" containerID="85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9" Mar 12 14:50:58.382126 master-0 kubenswrapper[37036]: I0312 14:50:58.382097 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-config-data\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.383353 master-0 kubenswrapper[37036]: E0312 14:50:58.382407 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9\": container with ID starting with 85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9 not found: ID does not exist" containerID="85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9" Mar 12 14:50:58.383353 master-0 kubenswrapper[37036]: I0312 14:50:58.382442 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9"} err="failed to get container status \"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9\": rpc error: code = NotFound desc = could not find container \"85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9\": container with ID starting with 85fc6e16a5e14cfdaba511a0405cf29bbd3e615ba3cb9141c87f406997a176e9 not found: ID does not exist" Mar 12 14:50:58.383353 master-0 kubenswrapper[37036]: I0312 14:50:58.382464 37036 
scope.go:117] "RemoveContainer" containerID="a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6" Mar 12 14:50:58.383353 master-0 kubenswrapper[37036]: I0312 14:50:58.383305 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f760056-b25d-4261-9ad5-66ba2dc8e046-internal-tls-certs\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.385199 master-0 kubenswrapper[37036]: E0312 14:50:58.384799 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6\": container with ID starting with a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6 not found: ID does not exist" containerID="a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6" Mar 12 14:50:58.385199 master-0 kubenswrapper[37036]: I0312 14:50:58.384831 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6"} err="failed to get container status \"a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6\": rpc error: code = NotFound desc = could not find container \"a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6\": container with ID starting with a957490eba9eafb2507b57e06262377dbbd9d6a2e8d086ab00bdeafd312a90a6 not found: ID does not exist" Mar 12 14:50:58.398071 master-0 kubenswrapper[37036]: I0312 14:50:58.398021 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmwv\" (UniqueName: \"kubernetes.io/projected/3f760056-b25d-4261-9ad5-66ba2dc8e046-kube-api-access-btmwv\") pod \"cinder-05598-api-0\" (UID: \"3f760056-b25d-4261-9ad5-66ba2dc8e046\") " pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.508281 master-0 
kubenswrapper[37036]: I0312 14:50:58.508227 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-api-0" Mar 12 14:50:58.516094 master-0 kubenswrapper[37036]: I0312 14:50:58.516027 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"] Mar 12 14:50:58.543237 master-0 kubenswrapper[37036]: I0312 14:50:58.538058 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddf45ffb9-dpmhn"] Mar 12 14:50:58.758102 master-0 kubenswrapper[37036]: I0312 14:50:58.756821 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65b67d6cc7-hwxhs"] Mar 12 14:50:58.759025 master-0 kubenswrapper[37036]: I0312 14:50:58.758607 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.766440 master-0 kubenswrapper[37036]: I0312 14:50:58.766373 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 14:50:58.766679 master-0 kubenswrapper[37036]: I0312 14:50:58.766600 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 14:50:58.807777 master-0 kubenswrapper[37036]: I0312 14:50:58.807424 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65b67d6cc7-hwxhs"] Mar 12 14:50:58.882230 master-0 kubenswrapper[37036]: I0312 14:50:58.882164 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-httpd-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882468 master-0 kubenswrapper[37036]: I0312 14:50:58.882318 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6rdfk\" (UniqueName: \"kubernetes.io/projected/62ff79c8-5b73-4f55-a1d5-6288146d42f7-kube-api-access-6rdfk\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882468 master-0 kubenswrapper[37036]: I0312 14:50:58.882349 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-combined-ca-bundle\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882468 master-0 kubenswrapper[37036]: I0312 14:50:58.882387 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-internal-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882468 master-0 kubenswrapper[37036]: I0312 14:50:58.882456 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-public-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882761 master-0 kubenswrapper[37036]: I0312 14:50:58.882489 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.882761 master-0 kubenswrapper[37036]: I0312 14:50:58.882519 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-ovndb-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.991133 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-public-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.992035 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.992115 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-ovndb-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.992958 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-httpd-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.993158 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6rdfk\" (UniqueName: \"kubernetes.io/projected/62ff79c8-5b73-4f55-a1d5-6288146d42f7-kube-api-access-6rdfk\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.993189 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-combined-ca-bundle\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.994185 master-0 kubenswrapper[37036]: I0312 14:50:58.993263 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-internal-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.998922 master-0 kubenswrapper[37036]: I0312 14:50:58.995525 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-public-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:58.998922 master-0 kubenswrapper[37036]: I0312 14:50:58.998586 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.002572 master-0 kubenswrapper[37036]: I0312 14:50:59.002528 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-combined-ca-bundle\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.004729 master-0 kubenswrapper[37036]: I0312 14:50:59.004657 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-internal-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.010005 master-0 kubenswrapper[37036]: I0312 14:50:59.008209 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-ovndb-tls-certs\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.013930 master-0 kubenswrapper[37036]: I0312 14:50:59.012577 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62ff79c8-5b73-4f55-a1d5-6288146d42f7-httpd-config\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.030923 master-0 kubenswrapper[37036]: I0312 14:50:59.026699 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" event={"ID":"00cfce92-4961-4a84-a59e-b1b979f29a35","Type":"ContainerStarted","Data":"e012b91d96d153c570bd70db1b78442cffd6781ba2df9b6f987377ce7dbc2fd8"} Mar 12 14:50:59.030923 master-0 kubenswrapper[37036]: I0312 14:50:59.028635 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:50:59.030923 master-0 kubenswrapper[37036]: I0312 
14:50:59.029257 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdfk\" (UniqueName: \"kubernetes.io/projected/62ff79c8-5b73-4f55-a1d5-6288146d42f7-kube-api-access-6rdfk\") pod \"neutron-65b67d6cc7-hwxhs\" (UID: \"62ff79c8-5b73-4f55-a1d5-6288146d42f7\") " pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.075685 master-0 kubenswrapper[37036]: I0312 14:50:59.075339 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" podStartSLOduration=5.075317805 podStartE2EDuration="5.075317805s" podCreationTimestamp="2026-03-12 14:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:50:59.054161277 +0000 UTC m=+918.061902214" watchObservedRunningTime="2026-03-12 14:50:59.075317805 +0000 UTC m=+918.083058742" Mar 12 14:50:59.119547 master-0 kubenswrapper[37036]: I0312 14:50:59.119479 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-api-0"] Mar 12 14:50:59.152773 master-0 kubenswrapper[37036]: I0312 14:50:59.152712 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:50:59.265121 master-0 kubenswrapper[37036]: I0312 14:50:59.262877 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba06b21-ba2e-4104-aa8e-f23391c08939" path="/var/lib/kubelet/pods/5ba06b21-ba2e-4104-aa8e-f23391c08939/volumes" Mar 12 14:50:59.265121 master-0 kubenswrapper[37036]: I0312 14:50:59.263682 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865aa6c9-1fad-481d-a029-977159f15829" path="/var/lib/kubelet/pods/865aa6c9-1fad-481d-a029-977159f15829/volumes" Mar 12 14:50:59.758520 master-0 kubenswrapper[37036]: I0312 14:50:59.758392 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65b67d6cc7-hwxhs"] Mar 12 14:51:00.050528 master-0 kubenswrapper[37036]: I0312 14:51:00.050128 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"3f760056-b25d-4261-9ad5-66ba2dc8e046","Type":"ContainerStarted","Data":"723ecbcb4daf7d6351af143f220bad7a98a1f467cc70a528984bf0ec0f06a7a9"} Mar 12 14:51:00.050528 master-0 kubenswrapper[37036]: I0312 14:51:00.050190 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"3f760056-b25d-4261-9ad5-66ba2dc8e046","Type":"ContainerStarted","Data":"d61a2d38a68ebaba375f9665023be26d7fab74a3c4510e7a834b3e9ff0b2e683"} Mar 12 14:51:00.054739 master-0 kubenswrapper[37036]: I0312 14:51:00.053098 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b67d6cc7-hwxhs" event={"ID":"62ff79c8-5b73-4f55-a1d5-6288146d42f7","Type":"ContainerStarted","Data":"65ce59740c02c9462bb7743d789e15c248b8c592825bc1aee1749a899b05b42e"} Mar 12 14:51:00.054739 master-0 kubenswrapper[37036]: I0312 14:51:00.053160 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b67d6cc7-hwxhs" 
event={"ID":"62ff79c8-5b73-4f55-a1d5-6288146d42f7","Type":"ContainerStarted","Data":"4bc6b35b52d3dcf1be3dd24838476c427763436b51b8709c2b088bbc1a732843"} Mar 12 14:51:01.085214 master-0 kubenswrapper[37036]: I0312 14:51:01.085027 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-api-0" event={"ID":"3f760056-b25d-4261-9ad5-66ba2dc8e046","Type":"ContainerStarted","Data":"f58df98f48f6025b9a2b89702e4788789f06e0c232a38d80ba405c1200fca93e"} Mar 12 14:51:01.085874 master-0 kubenswrapper[37036]: I0312 14:51:01.085343 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-05598-api-0" Mar 12 14:51:01.091216 master-0 kubenswrapper[37036]: I0312 14:51:01.091168 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65b67d6cc7-hwxhs" event={"ID":"62ff79c8-5b73-4f55-a1d5-6288146d42f7","Type":"ContainerStarted","Data":"6a4c760001a7f14951473ec78710fc2ff28d8f0eb8cc834995bd0d6e485df7e7"} Mar 12 14:51:01.091576 master-0 kubenswrapper[37036]: I0312 14:51:01.091557 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:51:01.139451 master-0 kubenswrapper[37036]: I0312 14:51:01.139362 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65b67d6cc7-hwxhs" podStartSLOduration=3.139344307 podStartE2EDuration="3.139344307s" podCreationTimestamp="2026-03-12 14:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.134606634 +0000 UTC m=+920.142347571" watchObservedRunningTime="2026-03-12 14:51:01.139344307 +0000 UTC m=+920.147085244" Mar 12 14:51:01.153275 master-0 kubenswrapper[37036]: I0312 14:51:01.153173 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-api-0" podStartSLOduration=3.153148627 podStartE2EDuration="3.153148627s" 
podCreationTimestamp="2026-03-12 14:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:01.11022001 +0000 UTC m=+920.117960967" watchObservedRunningTime="2026-03-12 14:51:01.153148627 +0000 UTC m=+920.160889564" Mar 12 14:51:01.612749 master-0 kubenswrapper[37036]: I0312 14:51:01.612690 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:01.670798 master-0 kubenswrapper[37036]: I0312 14:51:01.670739 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:51:01.824602 master-0 kubenswrapper[37036]: I0312 14:51:01.824542 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-scheduler-0" Mar 12 14:51:01.879696 master-0 kubenswrapper[37036]: I0312 14:51:01.879537 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-scheduler-0"] Mar 12 14:51:02.110131 master-0 kubenswrapper[37036]: I0312 14:51:02.110061 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-scheduler-0" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="cinder-scheduler" containerID="cri-o://d821634c4b364f62a644cc88a3dc597d65362ac07795102176632645f9dca271" gracePeriod=30 Mar 12 14:51:02.112383 master-0 kubenswrapper[37036]: I0312 14:51:02.112327 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-volume-lvm-iscsi-0" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="cinder-volume" containerID="cri-o://cc012d715203f799e02dbf073e8bbf4b7fa3a3de5b2b96876d2846fc6aea2746" gracePeriod=30 Mar 12 14:51:02.114705 master-0 kubenswrapper[37036]: I0312 14:51:02.113661 37036 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-05598-scheduler-0" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="probe" containerID="cri-o://696b0917d16a65a000c7418fcc3f3586e0ddfb0f68db7e45eb59396b6b67b5b6" gracePeriod=30 Mar 12 14:51:02.114705 master-0 kubenswrapper[37036]: I0312 14:51:02.113748 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-volume-lvm-iscsi-0" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="probe" containerID="cri-o://0ae3e32dde61cfc9bffc5b18348f8686afde317becb2f0a984cb35b540adce74" gracePeriod=30 Mar 12 14:51:02.137316 master-0 kubenswrapper[37036]: I0312 14:51:02.137151 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-backup-0" Mar 12 14:51:02.225099 master-0 kubenswrapper[37036]: I0312 14:51:02.225029 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:03.158573 master-0 kubenswrapper[37036]: I0312 14:51:03.158479 37036 generic.go:334] "Generic (PLEG): container finished" podID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerID="3ffba879105a8fc866bb0739663862a8fdb342c2ae9dfc87fa49036a1948f8a7" exitCode=0 Mar 12 14:51:03.158573 master-0 kubenswrapper[37036]: I0312 14:51:03.158560 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-hzl7q" event={"ID":"67c2c80d-8881-4a05-8d2f-2350b3848b13","Type":"ContainerDied","Data":"3ffba879105a8fc866bb0739663862a8fdb342c2ae9dfc87fa49036a1948f8a7"} Mar 12 14:51:03.163058 master-0 kubenswrapper[37036]: I0312 14:51:03.163009 37036 generic.go:334] "Generic (PLEG): container finished" podID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerID="696b0917d16a65a000c7418fcc3f3586e0ddfb0f68db7e45eb59396b6b67b5b6" exitCode=0 Mar 12 14:51:03.163193 master-0 kubenswrapper[37036]: I0312 14:51:03.163093 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" 
event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerDied","Data":"696b0917d16a65a000c7418fcc3f3586e0ddfb0f68db7e45eb59396b6b67b5b6"} Mar 12 14:51:03.166928 master-0 kubenswrapper[37036]: I0312 14:51:03.166864 37036 generic.go:334] "Generic (PLEG): container finished" podID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerID="0ae3e32dde61cfc9bffc5b18348f8686afde317becb2f0a984cb35b540adce74" exitCode=0 Mar 12 14:51:03.166928 master-0 kubenswrapper[37036]: I0312 14:51:03.166892 37036 generic.go:334] "Generic (PLEG): container finished" podID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerID="cc012d715203f799e02dbf073e8bbf4b7fa3a3de5b2b96876d2846fc6aea2746" exitCode=0 Mar 12 14:51:03.167094 master-0 kubenswrapper[37036]: I0312 14:51:03.166918 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerDied","Data":"0ae3e32dde61cfc9bffc5b18348f8686afde317becb2f0a984cb35b540adce74"} Mar 12 14:51:03.167094 master-0 kubenswrapper[37036]: I0312 14:51:03.166975 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerDied","Data":"cc012d715203f799e02dbf073e8bbf4b7fa3a3de5b2b96876d2846fc6aea2746"} Mar 12 14:51:03.167187 master-0 kubenswrapper[37036]: I0312 14:51:03.167150 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-backup-0" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="cinder-backup" containerID="cri-o://b4cc022785839865aedd2c73e9f5aa6cfda959189b88b8bb581e5cd617b459f0" gracePeriod=30 Mar 12 14:51:03.167235 master-0 kubenswrapper[37036]: I0312 14:51:03.167195 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-05598-backup-0" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="probe" 
containerID="cri-o://508ccd5b5bd99366be885a72f52be840ab7faa8850dc8956610dd979f03cc6d6" gracePeriod=30 Mar 12 14:51:03.517320 master-0 kubenswrapper[37036]: I0312 14:51:03.517230 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:03.681695 master-0 kubenswrapper[37036]: I0312 14:51:03.681549 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.681695 master-0 kubenswrapper[37036]: I0312 14:51:03.681617 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.681695 master-0 kubenswrapper[37036]: I0312 14:51:03.681635 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.681695 master-0 kubenswrapper[37036]: I0312 14:51:03.681674 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681741 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x294q\" (UniqueName: 
\"kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681850 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681857 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681889 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681921 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681941 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.681988 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682062 master-0 kubenswrapper[37036]: I0312 14:51:03.682018 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682082 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682124 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682199 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682203 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682274 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682294 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682308 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys" (OuterVolumeSpecName: "sys") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682327 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682422 master-0 kubenswrapper[37036]: I0312 14:51:03.682383 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682779 master-0 kubenswrapper[37036]: I0312 14:51:03.682438 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick\") pod \"f3229786-cb19-4355-b538-ac9bbaedc4b3\" (UID: \"f3229786-cb19-4355-b538-ac9bbaedc4b3\") " Mar 12 14:51:03.682830 master-0 kubenswrapper[37036]: I0312 14:51:03.682771 37036 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682830 master-0 kubenswrapper[37036]: I0312 14:51:03.682806 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run" (OuterVolumeSpecName: "run") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.682936 master-0 kubenswrapper[37036]: I0312 14:51:03.682833 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev" (OuterVolumeSpecName: "dev") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:03.683550 master-0 kubenswrapper[37036]: I0312 14:51:03.683490 37036 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683550 master-0 kubenswrapper[37036]: I0312 14:51:03.683520 37036 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683550 master-0 kubenswrapper[37036]: I0312 14:51:03.683531 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683550 master-0 kubenswrapper[37036]: I0312 14:51:03.683540 37036 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683550 master-0 kubenswrapper[37036]: I0312 14:51:03.683548 37036 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683796 master-0 kubenswrapper[37036]: I0312 14:51:03.683556 37036 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683796 master-0 kubenswrapper[37036]: I0312 14:51:03.683564 37036 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-sys\") on node 
\"master-0\" DevicePath \"\"" Mar 12 14:51:03.683796 master-0 kubenswrapper[37036]: I0312 14:51:03.683572 37036 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-run\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683796 master-0 kubenswrapper[37036]: I0312 14:51:03.683579 37036 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-dev\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.683796 master-0 kubenswrapper[37036]: I0312 14:51:03.683587 37036 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3229786-cb19-4355-b538-ac9bbaedc4b3-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.684802 master-0 kubenswrapper[37036]: I0312 14:51:03.684734 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts" (OuterVolumeSpecName: "scripts") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:03.690416 master-0 kubenswrapper[37036]: I0312 14:51:03.690310 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:03.691024 master-0 kubenswrapper[37036]: I0312 14:51:03.690940 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q" (OuterVolumeSpecName: "kube-api-access-x294q") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "kube-api-access-x294q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:03.745948 master-0 kubenswrapper[37036]: I0312 14:51:03.745866 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:03.786318 master-0 kubenswrapper[37036]: I0312 14:51:03.786218 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.786318 master-0 kubenswrapper[37036]: I0312 14:51:03.786261 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.786318 master-0 kubenswrapper[37036]: I0312 14:51:03.786271 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.786318 master-0 kubenswrapper[37036]: I0312 14:51:03.786283 37036 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x294q\" (UniqueName: \"kubernetes.io/projected/f3229786-cb19-4355-b538-ac9bbaedc4b3-kube-api-access-x294q\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:03.834556 master-0 kubenswrapper[37036]: I0312 14:51:03.834495 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data" (OuterVolumeSpecName: "config-data") pod "f3229786-cb19-4355-b538-ac9bbaedc4b3" (UID: "f3229786-cb19-4355-b538-ac9bbaedc4b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:03.888358 master-0 kubenswrapper[37036]: I0312 14:51:03.888294 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3229786-cb19-4355-b538-ac9bbaedc4b3-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:04.179604 master-0 kubenswrapper[37036]: I0312 14:51:04.179561 37036 generic.go:334] "Generic (PLEG): container finished" podID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerID="d821634c4b364f62a644cc88a3dc597d65362ac07795102176632645f9dca271" exitCode=0 Mar 12 14:51:04.180073 master-0 kubenswrapper[37036]: I0312 14:51:04.179629 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerDied","Data":"d821634c4b364f62a644cc88a3dc597d65362ac07795102176632645f9dca271"} Mar 12 14:51:04.183033 master-0 kubenswrapper[37036]: I0312 14:51:04.182997 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerDied","Data":"508ccd5b5bd99366be885a72f52be840ab7faa8850dc8956610dd979f03cc6d6"} Mar 12 14:51:04.185977 master-0 kubenswrapper[37036]: I0312 14:51:04.185918 37036 generic.go:334] "Generic (PLEG): container finished" podID="f3fda052-1aaa-41a2-80a1-0917c2494c02" 
containerID="508ccd5b5bd99366be885a72f52be840ab7faa8850dc8956610dd979f03cc6d6" exitCode=0 Mar 12 14:51:04.195573 master-0 kubenswrapper[37036]: I0312 14:51:04.195250 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.196149 master-0 kubenswrapper[37036]: I0312 14:51:04.195731 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"f3229786-cb19-4355-b538-ac9bbaedc4b3","Type":"ContainerDied","Data":"2cef4745b40ae14ecc080e42b727f7bdbcb62567da720ac71e610e705673b596"} Mar 12 14:51:04.196149 master-0 kubenswrapper[37036]: I0312 14:51:04.195780 37036 scope.go:117] "RemoveContainer" containerID="0ae3e32dde61cfc9bffc5b18348f8686afde317becb2f0a984cb35b540adce74" Mar 12 14:51:04.248352 master-0 kubenswrapper[37036]: I0312 14:51:04.248050 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:51:04.265350 master-0 kubenswrapper[37036]: I0312 14:51:04.263943 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:51:04.265350 master-0 kubenswrapper[37036]: I0312 14:51:04.264246 37036 scope.go:117] "RemoveContainer" containerID="cc012d715203f799e02dbf073e8bbf4b7fa3a3de5b2b96876d2846fc6aea2746" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.294482 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: E0312 14:51:04.295040 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="probe" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.295056 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="probe" Mar 12 14:51:04.319049 master-0 
kubenswrapper[37036]: E0312 14:51:04.295088 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="cinder-volume" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.295095 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="cinder-volume" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.295326 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="cinder-volume" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.295379 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" containerName="probe" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.296614 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.300741 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-volume-lvm-iscsi-config-data" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301749 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301784 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: 
\"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301841 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301864 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301881 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301912 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301963 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.301993 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302018 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9ph\" (UniqueName: \"kubernetes.io/projected/d700a6d3-cb4f-4971-8b80-30eaab119193-kube-api-access-vl9ph\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302038 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302108 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302126 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302155 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302204 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.302226 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.319049 master-0 kubenswrapper[37036]: I0312 14:51:04.311519 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"] Mar 12 14:51:04.403230 master-0 kubenswrapper[37036]: I0312 14:51:04.403118 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403230 master-0 kubenswrapper[37036]: I0312 14:51:04.403165 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403230 master-0 kubenswrapper[37036]: I0312 14:51:04.403201 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403230 master-0 kubenswrapper[37036]: I0312 14:51:04.403226 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403247 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403274 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403294 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403337 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403357 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403372 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403391 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403441 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403476 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403500 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl9ph\" (UniqueName: \"kubernetes.io/projected/d700a6d3-cb4f-4971-8b80-30eaab119193-kube-api-access-vl9ph\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.403581 master-0 kubenswrapper[37036]: I0312 14:51:04.403519 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0" Mar 12 14:51:04.404040 master-0 kubenswrapper[37036]: I0312 14:51:04.404013 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-brick\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.404211 master-0 kubenswrapper[37036]: I0312 14:51:04.404168 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-iscsi\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.404342 master-0 kubenswrapper[37036]: I0312 14:51:04.404326 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-run\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.404466 master-0 kubenswrapper[37036]: I0312 14:51:04.404447 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-nvme\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.404940 master-0 kubenswrapper[37036]: I0312 14:51:04.404910 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-locks-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.405088 master-0 kubenswrapper[37036]: I0312 14:51:04.405054 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-etc-machine-id\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.405161 master-0 kubenswrapper[37036]: I0312 14:51:04.405102 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-var-lib-cinder\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.405161 master-0 kubenswrapper[37036]: I0312 14:51:04.405128 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-lib-modules\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.405161 master-0 kubenswrapper[37036]: I0312 14:51:04.405154 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-sys\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.407788 master-0 kubenswrapper[37036]: I0312 14:51:04.407669 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d700a6d3-cb4f-4971-8b80-30eaab119193-dev\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.410763 master-0 kubenswrapper[37036]: I0312 14:51:04.410683 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-scripts\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.422747 master-0 kubenswrapper[37036]: I0312 14:51:04.422690 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-combined-ca-bundle\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.425121 master-0 kubenswrapper[37036]: I0312 14:51:04.425081 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data-custom\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.429107 master-0 kubenswrapper[37036]: I0312 14:51:04.429033 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl9ph\" (UniqueName: \"kubernetes.io/projected/d700a6d3-cb4f-4971-8b80-30eaab119193-kube-api-access-vl9ph\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.456086 master-0 kubenswrapper[37036]: I0312 14:51:04.456018 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d700a6d3-cb4f-4971-8b80-30eaab119193-config-data\") pod \"cinder-05598-volume-lvm-iscsi-0\" (UID: \"d700a6d3-cb4f-4971-8b80-30eaab119193\") " pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.650834 master-0 kubenswrapper[37036]: I0312 14:51:04.649860 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:04.802676 master-0 kubenswrapper[37036]: I0312 14:51:04.802617 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.920693 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.920819 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.920864 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.920970 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.921001 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.921022 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.921076 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqfq\" (UniqueName: \"kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq\") pod \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\" (UID: \"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7\") "
Mar 12 14:51:04.921941 master-0 kubenswrapper[37036]: I0312 14:51:04.921726 37036 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:04.936475 master-0 kubenswrapper[37036]: I0312 14:51:04.936040 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:04.937362 master-0 kubenswrapper[37036]: I0312 14:51:04.937289 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq" (OuterVolumeSpecName: "kube-api-access-7pqfq") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "kube-api-access-7pqfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:04.941081 master-0 kubenswrapper[37036]: I0312 14:51:04.941031 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts" (OuterVolumeSpecName: "scripts") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:04.963376 master-0 kubenswrapper[37036]: I0312 14:51:04.963316 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-hzl7q"
Mar 12 14:51:05.008188 master-0 kubenswrapper[37036]: I0312 14:51:05.008086 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:05.024533 master-0 kubenswrapper[37036]: I0312 14:51:05.024357 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pqfq\" (UniqueName: \"kubernetes.io/projected/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-kube-api-access-7pqfq\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.024533 master-0 kubenswrapper[37036]: I0312 14:51:05.024423 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.024533 master-0 kubenswrapper[37036]: I0312 14:51:05.024437 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.024533 master-0 kubenswrapper[37036]: I0312 14:51:05.024449 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.066947 master-0 kubenswrapper[37036]: I0312 14:51:05.066808 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data" (OuterVolumeSpecName: "config-data") pod "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" (UID: "fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:05.146207 master-0 kubenswrapper[37036]: I0312 14:51:05.146142 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.146452 master-0 kubenswrapper[37036]: I0312 14:51:05.146268 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.146452 master-0 kubenswrapper[37036]: I0312 14:51:05.146369 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.146604 master-0 kubenswrapper[37036]: I0312 14:51:05.146510 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.146733 master-0 kubenswrapper[37036]: I0312 14:51:05.146636 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwzj7\" (UniqueName: \"kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.146733 master-0 kubenswrapper[37036]: I0312 14:51:05.146719 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged\") pod \"67c2c80d-8881-4a05-8d2f-2350b3848b13\" (UID: \"67c2c80d-8881-4a05-8d2f-2350b3848b13\") "
Mar 12 14:51:05.147396 master-0 kubenswrapper[37036]: I0312 14:51:05.147302 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.148353 master-0 kubenswrapper[37036]: I0312 14:51:05.147753 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:05.162922 master-0 kubenswrapper[37036]: I0312 14:51:05.162810 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts" (OuterVolumeSpecName: "scripts") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:05.163229 master-0 kubenswrapper[37036]: I0312 14:51:05.162969 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7" (OuterVolumeSpecName: "kube-api-access-bwzj7") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "kube-api-access-bwzj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:05.166880 master-0 kubenswrapper[37036]: I0312 14:51:05.166798 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 14:51:05.215020 master-0 kubenswrapper[37036]: I0312 14:51:05.213840 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data" (OuterVolumeSpecName: "config-data") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:05.235169 master-0 kubenswrapper[37036]: I0312 14:51:05.235116 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.247922 master-0 kubenswrapper[37036]: I0312 14:51:05.247303 37036 generic.go:334] "Generic (PLEG): container finished" podID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerID="b4cc022785839865aedd2c73e9f5aa6cfda959189b88b8bb581e5cd617b459f0" exitCode=0
Mar 12 14:51:05.248851 master-0 kubenswrapper[37036]: I0312 14:51:05.248816 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.248851 master-0 kubenswrapper[37036]: I0312 14:51:05.248839 37036 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/67c2c80d-8881-4a05-8d2f-2350b3848b13-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.248851 master-0 kubenswrapper[37036]: I0312 14:51:05.248850 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.249022 master-0 kubenswrapper[37036]: I0312 14:51:05.248860 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.249022 master-0 kubenswrapper[37036]: I0312 14:51:05.248869 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwzj7\" (UniqueName: \"kubernetes.io/projected/67c2c80d-8881-4a05-8d2f-2350b3848b13-kube-api-access-bwzj7\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.255301 master-0 kubenswrapper[37036]: I0312 14:51:05.255228 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-hzl7q"
Mar 12 14:51:05.255743 master-0 kubenswrapper[37036]: I0312 14:51:05.255674 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67c2c80d-8881-4a05-8d2f-2350b3848b13" (UID: "67c2c80d-8881-4a05-8d2f-2350b3848b13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:05.262785 master-0 kubenswrapper[37036]: I0312 14:51:05.262739 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3229786-cb19-4355-b538-ac9bbaedc4b3" path="/var/lib/kubelet/pods/f3229786-cb19-4355-b538-ac9bbaedc4b3/volumes"
Mar 12 14:51:05.264607 master-0 kubenswrapper[37036]: I0312 14:51:05.264282 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7","Type":"ContainerDied","Data":"31ae4d9822a7931bc1aadbda22285a8b3e0599ef3fbbaa0541eac585a8dbb91f"}
Mar 12 14:51:05.264607 master-0 kubenswrapper[37036]: I0312 14:51:05.264321 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerDied","Data":"b4cc022785839865aedd2c73e9f5aa6cfda959189b88b8bb581e5cd617b459f0"}
Mar 12 14:51:05.264607 master-0 kubenswrapper[37036]: I0312 14:51:05.264382 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-hzl7q" event={"ID":"67c2c80d-8881-4a05-8d2f-2350b3848b13","Type":"ContainerDied","Data":"a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828"}
Mar 12 14:51:05.264607 master-0 kubenswrapper[37036]: I0312 14:51:05.264398 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2757b0846a16d24795ea7de5e51ceda67532a048830e0bb73d068ae57aec828"
Mar 12 14:51:05.264607 master-0 kubenswrapper[37036]: I0312 14:51:05.264416 37036 scope.go:117] "RemoveContainer" containerID="696b0917d16a65a000c7418fcc3f3586e0ddfb0f68db7e45eb59396b6b67b5b6"
Mar 12 14:51:05.282321 master-0 kubenswrapper[37036]: I0312 14:51:05.282230 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-volume-lvm-iscsi-0"]
Mar 12 14:51:05.323843 master-0 kubenswrapper[37036]: I0312 14:51:05.312477 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:51:05.327330 master-0 kubenswrapper[37036]: I0312 14:51:05.327271 37036 scope.go:117] "RemoveContainer" containerID="d821634c4b364f62a644cc88a3dc597d65362ac07795102176632645f9dca271"
Mar 12 14:51:05.375072 master-0 kubenswrapper[37036]: I0312 14:51:05.353586 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:51:05.375072 master-0 kubenswrapper[37036]: I0312 14:51:05.366172 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c2c80d-8881-4a05-8d2f-2350b3848b13-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:05.375624 master-0 kubenswrapper[37036]: I0312 14:51:05.375470 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:51:05.376095 master-0 kubenswrapper[37036]: E0312 14:51:05.376073 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerName="init"
Mar 12 14:51:05.376166 master-0 kubenswrapper[37036]: I0312 14:51:05.376099 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerName="init"
Mar 12 14:51:05.376166 master-0 kubenswrapper[37036]: E0312 14:51:05.376123 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="probe"
Mar 12 14:51:05.376166 master-0 kubenswrapper[37036]: I0312 14:51:05.376132 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="probe"
Mar 12 14:51:05.376166 master-0 kubenswrapper[37036]: E0312 14:51:05.376156 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="cinder-scheduler"
Mar 12 14:51:05.376166 master-0 kubenswrapper[37036]: I0312 14:51:05.376165 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="cinder-scheduler"
Mar 12 14:51:05.376319 master-0 kubenswrapper[37036]: E0312 14:51:05.376183 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerName="ironic-db-sync"
Mar 12 14:51:05.376319 master-0 kubenswrapper[37036]: I0312 14:51:05.376193 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerName="ironic-db-sync"
Mar 12 14:51:05.376545 master-0 kubenswrapper[37036]: I0312 14:51:05.376524 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="cinder-scheduler"
Mar 12 14:51:05.376623 master-0 kubenswrapper[37036]: I0312 14:51:05.376592 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c2c80d-8881-4a05-8d2f-2350b3848b13" containerName="ironic-db-sync"
Mar 12 14:51:05.376623 master-0 kubenswrapper[37036]: I0312 14:51:05.376613 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" containerName="probe"
Mar 12 14:51:05.381018 master-0 kubenswrapper[37036]: I0312 14:51:05.378626 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.382262 master-0 kubenswrapper[37036]: I0312 14:51:05.382230 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-scheduler-config-data"
Mar 12 14:51:05.395184 master-0 kubenswrapper[37036]: I0312 14:51:05.395122 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-scheduler-0"]
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476005 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476232 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxng\" (UniqueName: \"kubernetes.io/projected/76d48ea9-e7d6-4acd-b340-957c34aceb04-kube-api-access-smxng\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476388 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476490 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d48ea9-e7d6-4acd-b340-957c34aceb04-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476519 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.476729 master-0 kubenswrapper[37036]: I0312 14:51:05.476602 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579548 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579671 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579771 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxng\" (UniqueName: \"kubernetes.io/projected/76d48ea9-e7d6-4acd-b340-957c34aceb04-kube-api-access-smxng\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579840 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579887 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d48ea9-e7d6-4acd-b340-957c34aceb04-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.580430 master-0 kubenswrapper[37036]: I0312 14:51:05.579926 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.585358 master-0 kubenswrapper[37036]: I0312 14:51:05.585332 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d48ea9-e7d6-4acd-b340-957c34aceb04-etc-machine-id\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.589375 master-0 kubenswrapper[37036]: I0312 14:51:05.588140 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.589513 master-0 kubenswrapper[37036]: I0312 14:51:05.589293 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-scripts\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.593196 master-0 kubenswrapper[37036]: I0312 14:51:05.593171 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-config-data-custom\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.598516 master-0 kubenswrapper[37036]: I0312 14:51:05.596605 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d48ea9-e7d6-4acd-b340-957c34aceb04-combined-ca-bundle\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.659932 master-0 kubenswrapper[37036]: I0312 14:51:05.658308 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxng\" (UniqueName: \"kubernetes.io/projected/76d48ea9-e7d6-4acd-b340-957c34aceb04-kube-api-access-smxng\") pod \"cinder-05598-scheduler-0\" (UID: \"76d48ea9-e7d6-4acd-b340-957c34aceb04\") " pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.718558 master-0 kubenswrapper[37036]: I0312 14:51:05.718108 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:05.723380 master-0 kubenswrapper[37036]: I0312 14:51:05.723279 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg"
Mar 12 14:51:05.731922 master-0 kubenswrapper[37036]: I0312 14:51:05.729998 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-w9frr"]
Mar 12 14:51:05.735941 master-0 kubenswrapper[37036]: I0312 14:51:05.735641 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-w9frr"
Mar 12 14:51:05.738276 master-0 kubenswrapper[37036]: I0312 14:51:05.738224 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-backup-0"
Mar 12 14:51:05.760925 master-0 kubenswrapper[37036]: I0312 14:51:05.747841 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-w9frr"]
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784172 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784219 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784251 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784282 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs2lt\" (UniqueName: \"kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784547 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784602 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784631 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784672 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784711 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784800 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784825 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") "
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.784982 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785049 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run" (OuterVolumeSpecName: "run") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785100 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785125 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785143 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785491 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785532 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785564 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785615 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme\") pod \"f3fda052-1aaa-41a2-80a1-0917c2494c02\" (UID: \"f3fda052-1aaa-41a2-80a1-0917c2494c02\") " Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785844 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkp5\" (UniqueName: \"kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.785995 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786217 37036 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786233 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786242 37036 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786251 37036 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-run\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786259 37036 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786552 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick" 
(OuterVolumeSpecName: "var-locks-brick") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786627 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys" (OuterVolumeSpecName: "sys") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786661 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.786658 master-0 kubenswrapper[37036]: I0312 14:51:05.786689 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev" (OuterVolumeSpecName: "dev") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.788997 master-0 kubenswrapper[37036]: I0312 14:51:05.787892 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 14:51:05.822926 master-0 kubenswrapper[37036]: I0312 14:51:05.815017 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 14:51:05.890628 master-0 kubenswrapper[37036]: I0312 14:51:05.889962 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.895868 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896316 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkp5\" (UniqueName: \"kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896411 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896424 37036 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896435 37036 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-sys\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896444 37036 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896452 37036 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-dev\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.896464 37036 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3fda052-1aaa-41a2-80a1-0917c2494c02-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.897417 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.905173 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-5685659465-xhxkv"] Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: E0312 14:51:05.905692 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" 
containerName="cinder-backup" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.905705 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="cinder-backup" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: E0312 14:51:05.905754 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="probe" Mar 12 14:51:05.905878 master-0 kubenswrapper[37036]: I0312 14:51:05.905761 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="probe" Mar 12 14:51:05.911619 master-0 kubenswrapper[37036]: I0312 14:51:05.909483 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="cinder-backup" Mar 12 14:51:05.911619 master-0 kubenswrapper[37036]: I0312 14:51:05.909531 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" containerName="probe" Mar 12 14:51:05.911619 master-0 kubenswrapper[37036]: I0312 14:51:05.910258 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:05.932142 master-0 kubenswrapper[37036]: I0312 14:51:05.915601 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 12 14:51:05.961953 master-0 kubenswrapper[37036]: I0312 14:51:05.951495 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt" (OuterVolumeSpecName: "kube-api-access-fs2lt") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "kube-api-access-fs2lt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:05.961953 master-0 kubenswrapper[37036]: I0312 14:51:05.951611 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-a166-account-create-update-4lb7x"] Mar 12 14:51:05.983540 master-0 kubenswrapper[37036]: I0312 14:51:05.979066 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:05.991841 master-0 kubenswrapper[37036]: I0312 14:51:05.989159 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts" (OuterVolumeSpecName: "scripts") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998667 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998737 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghvs\" (UniqueName: \"kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998766 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-config\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998787 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwv9z\" (UniqueName: \"kubernetes.io/projected/ee3a29d2-bf14-4521-896e-b0169adefcb2-kube-api-access-gwv9z\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998812 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-combined-ca-bundle\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998982 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs2lt\" (UniqueName: \"kubernetes.io/projected/f3fda052-1aaa-41a2-80a1-0917c2494c02-kube-api-access-fs2lt\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:05.999944 master-0 kubenswrapper[37036]: I0312 14:51:05.998997 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:06.011319 master-0 kubenswrapper[37036]: I0312 14:51:06.010908 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkp5\" (UniqueName: 
\"kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5\") pod \"ironic-inspector-db-create-w9frr\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:06.011319 master-0 kubenswrapper[37036]: I0312 14:51:06.011072 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 12 14:51:06.028920 master-0 kubenswrapper[37036]: I0312 14:51:06.027373 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:51:06.030125 master-0 kubenswrapper[37036]: I0312 14:51:06.030057 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.034635 master-0 kubenswrapper[37036]: I0312 14:51:06.034309 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-5685659465-xhxkv"] Mar 12 14:51:06.051500 master-0 kubenswrapper[37036]: I0312 14:51:06.050595 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-587fdb965c-q72qp"] Mar 12 14:51:06.052737 master-0 kubenswrapper[37036]: I0312 14:51:06.052705 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.056334 master-0 kubenswrapper[37036]: I0312 14:51:06.056182 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 12 14:51:06.057475 master-0 kubenswrapper[37036]: I0312 14:51:06.057445 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 14:51:06.057646 master-0 kubenswrapper[37036]: I0312 14:51:06.057623 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 12 14:51:06.057756 master-0 kubenswrapper[37036]: I0312 14:51:06.057733 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 12 14:51:06.058081 master-0 kubenswrapper[37036]: I0312 14:51:06.057853 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.197251 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198321 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198395 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " 
pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198424 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198448 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghvs\" (UniqueName: \"kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198484 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpqk\" (UniqueName: \"kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198513 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-config\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198549 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198568 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwv9z\" (UniqueName: \"kubernetes.io/projected/ee3a29d2-bf14-4521-896e-b0169adefcb2-kube-api-access-gwv9z\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198596 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-combined-ca-bundle\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198620 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198639 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728l6\" (UniqueName: \"kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198691 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198721 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198746 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198768 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.203766 master-0 kubenswrapper[37036]: I0312 14:51:06.198812 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.203766 master-0 
kubenswrapper[37036]: I0312 14:51:06.198830 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.247683 master-0 kubenswrapper[37036]: I0312 14:51:06.198885 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.251874 master-0 kubenswrapper[37036]: I0312 14:51:06.201484 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:06.251874 master-0 kubenswrapper[37036]: I0312 14:51:06.249973 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.251874 master-0 kubenswrapper[37036]: I0312 14:51:06.202919 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:06.261743 master-0 kubenswrapper[37036]: I0312 14:51:06.261680 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ironic-inspector-a166-account-create-update-4lb7x"] Mar 12 14:51:06.330108 master-0 kubenswrapper[37036]: I0312 14:51:06.329869 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-combined-ca-bundle\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.330108 master-0 kubenswrapper[37036]: I0312 14:51:06.329987 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee3a29d2-bf14-4521-896e-b0169adefcb2-config\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.337715 master-0 kubenswrapper[37036]: I0312 14:51:06.337668 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghvs\" (UniqueName: \"kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs\") pod \"ironic-inspector-a166-account-create-update-4lb7x\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") " pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:06.368335 master-0 kubenswrapper[37036]: I0312 14:51:06.362746 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-587fdb965c-q72qp"] Mar 12 14:51:06.381219 master-0 kubenswrapper[37036]: I0312 14:51:06.377886 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwv9z\" (UniqueName: \"kubernetes.io/projected/ee3a29d2-bf14-4521-896e-b0169adefcb2-kube-api-access-gwv9z\") pod \"ironic-neutron-agent-5685659465-xhxkv\" (UID: \"ee3a29d2-bf14-4521-896e-b0169adefcb2\") " pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:06.386153 master-0 kubenswrapper[37036]: I0312 
14:51:06.386099 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386257 master-0 kubenswrapper[37036]: I0312 14:51:06.386238 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386306 master-0 kubenswrapper[37036]: I0312 14:51:06.386263 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386306 master-0 kubenswrapper[37036]: I0312 14:51:06.386290 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpqk\" (UniqueName: \"kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386374 master-0 kubenswrapper[37036]: I0312 14:51:06.386312 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386374 master-0 kubenswrapper[37036]: I0312 
14:51:06.386341 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386374 master-0 kubenswrapper[37036]: I0312 14:51:06.386359 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728l6\" (UniqueName: \"kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386466 master-0 kubenswrapper[37036]: I0312 14:51:06.386425 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386503 master-0 kubenswrapper[37036]: I0312 14:51:06.386464 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386503 master-0 kubenswrapper[37036]: I0312 14:51:06.386487 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386561 master-0 kubenswrapper[37036]: I0312 14:51:06.386512 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.386592 master-0 kubenswrapper[37036]: I0312 14:51:06.386573 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386699 master-0 kubenswrapper[37036]: I0312 14:51:06.386595 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.386699 master-0 kubenswrapper[37036]: I0312 14:51:06.386666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.389742 master-0 kubenswrapper[37036]: I0312 14:51:06.388337 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.391395 master-0 kubenswrapper[37036]: I0312 14:51:06.390939 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.393151 master-0 kubenswrapper[37036]: I0312 14:51:06.393116 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.394489 master-0 kubenswrapper[37036]: I0312 14:51:06.393911 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.396174 master-0 kubenswrapper[37036]: I0312 14:51:06.394678 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.398415 master-0 kubenswrapper[37036]: I0312 14:51:06.397731 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.398498 master-0 kubenswrapper[37036]: I0312 14:51:06.398413 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.400062 master-0 kubenswrapper[37036]: I0312 14:51:06.400014 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.400242 master-0 kubenswrapper[37036]: I0312 14:51:06.400210 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.400675 master-0 kubenswrapper[37036]: I0312 14:51:06.400648 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.410015 master-0 kubenswrapper[37036]: I0312 14:51:06.406782 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.410015 master-0 kubenswrapper[37036]: I0312 14:51:06.406868 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.429342 master-0 kubenswrapper[37036]: I0312 14:51:06.429141 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"d700a6d3-cb4f-4971-8b80-30eaab119193","Type":"ContainerStarted","Data":"2c00de4d84b8939c275d9075f82e2e405027a14901ce1f4eefad63da7cb07e79"} Mar 12 14:51:06.447000 master-0 kubenswrapper[37036]: I0312 14:51:06.445945 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-728l6\" (UniqueName: \"kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6\") pod \"ironic-587fdb965c-q72qp\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") " pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:06.493042 master-0 kubenswrapper[37036]: I0312 14:51:06.491626 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="dnsmasq-dns" containerID="cri-o://e012b91d96d153c570bd70db1b78442cffd6781ba2df9b6f987377ce7dbc2fd8" gracePeriod=10 Mar 12 14:51:06.493042 master-0 kubenswrapper[37036]: I0312 14:51:06.492023 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-05598-backup-0" Mar 12 14:51:06.493042 master-0 kubenswrapper[37036]: I0312 14:51:06.492722 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"f3fda052-1aaa-41a2-80a1-0917c2494c02","Type":"ContainerDied","Data":"2c9a12334196292e01eb5f9d62751f426dea592252ad7974c923d14988307dd5"} Mar 12 14:51:06.493042 master-0 kubenswrapper[37036]: I0312 14:51:06.492792 37036 scope.go:117] "RemoveContainer" containerID="508ccd5b5bd99366be885a72f52be840ab7faa8850dc8956610dd979f03cc6d6" Mar 12 14:51:06.509845 master-0 kubenswrapper[37036]: I0312 14:51:06.509792 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpqk\" (UniqueName: \"kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk\") pod \"dnsmasq-dns-7847764989-d9gwb\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:06.970017 master-0 kubenswrapper[37036]: I0312 14:51:06.962163 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-scheduler-0"] Mar 12 14:51:07.290442 master-0 kubenswrapper[37036]: I0312 14:51:07.276371 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:07.290442 master-0 kubenswrapper[37036]: I0312 14:51:07.286285 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7" path="/var/lib/kubelet/pods/fd7b2ee2-8dd2-4588-aa3f-cf22774d3ff7/volumes" Mar 12 14:51:07.290442 master-0 kubenswrapper[37036]: I0312 14:51:07.289713 37036 scope.go:117] "RemoveContainer" containerID="b4cc022785839865aedd2c73e9f5aa6cfda959189b88b8bb581e5cd617b459f0" Mar 12 14:51:07.307184 master-0 kubenswrapper[37036]: I0312 14:51:07.307122 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-w9frr"] Mar 12 14:51:07.348827 master-0 kubenswrapper[37036]: I0312 14:51:07.347326 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:07.360144 master-0 kubenswrapper[37036]: I0312 14:51:07.360107 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:07.404362 master-0 kubenswrapper[37036]: I0312 14:51:07.384798 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" Mar 12 14:51:07.404362 master-0 kubenswrapper[37036]: I0312 14:51:07.395143 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:07.415405 master-0 kubenswrapper[37036]: I0312 14:51:07.414971 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-587fdb965c-q72qp" Mar 12 14:51:07.441482 master-0 kubenswrapper[37036]: I0312 14:51:07.440177 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data" (OuterVolumeSpecName: "config-data") pod "f3fda052-1aaa-41a2-80a1-0917c2494c02" (UID: "f3fda052-1aaa-41a2-80a1-0917c2494c02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:07.450562 master-0 kubenswrapper[37036]: I0312 14:51:07.450514 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fda052-1aaa-41a2-80a1-0917c2494c02-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:07.581998 master-0 kubenswrapper[37036]: I0312 14:51:07.581845 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"76d48ea9-e7d6-4acd-b340-957c34aceb04","Type":"ContainerStarted","Data":"30eab59cc7365253440babbd4f2e280a03580a1ecf7aeaa45c24310d5a0fb5fe"} Mar 12 14:51:07.588532 master-0 kubenswrapper[37036]: I0312 14:51:07.587276 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-w9frr" event={"ID":"a271d093-100f-4c56-a201-16eb10358184","Type":"ContainerStarted","Data":"902b5e49e8379eebf353f9ba026577f131f269f2cd660aaee8781cb8656b738b"} Mar 12 14:51:07.592560 master-0 kubenswrapper[37036]: I0312 14:51:07.592515 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"d700a6d3-cb4f-4971-8b80-30eaab119193","Type":"ContainerStarted","Data":"97bf1350dbd5f30f831152da2c08d93ec43c352073ee1d592068ba79fa7ca9b0"} Mar 12 14:51:07.617758 master-0 kubenswrapper[37036]: I0312 14:51:07.617707 37036 generic.go:334] "Generic (PLEG): container finished" podID="00cfce92-4961-4a84-a59e-b1b979f29a35" 
containerID="e012b91d96d153c570bd70db1b78442cffd6781ba2df9b6f987377ce7dbc2fd8" exitCode=0 Mar 12 14:51:07.618038 master-0 kubenswrapper[37036]: I0312 14:51:07.617764 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" event={"ID":"00cfce92-4961-4a84-a59e-b1b979f29a35","Type":"ContainerDied","Data":"e012b91d96d153c570bd70db1b78442cffd6781ba2df9b6f987377ce7dbc2fd8"} Mar 12 14:51:07.877834 master-0 kubenswrapper[37036]: I0312 14:51:07.877573 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:51:07.957051 master-0 kubenswrapper[37036]: I0312 14:51:07.956836 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:07.980948 master-0 kubenswrapper[37036]: I0312 14:51:07.975106 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:07.980948 master-0 kubenswrapper[37036]: I0312 14:51:07.975157 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l5g\" (UniqueName: \"kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:07.980948 master-0 kubenswrapper[37036]: I0312 14:51:07.975189 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:07.988940 master-0 kubenswrapper[37036]: I0312 14:51:07.982617 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:07.988940 master-0 kubenswrapper[37036]: I0312 14:51:07.982816 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:07.988940 master-0 kubenswrapper[37036]: I0312 14:51:07.983106 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config\") pod \"00cfce92-4961-4a84-a59e-b1b979f29a35\" (UID: \"00cfce92-4961-4a84-a59e-b1b979f29a35\") " Mar 12 14:51:08.026065 master-0 kubenswrapper[37036]: I0312 14:51:08.020250 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:08.033069 master-0 kubenswrapper[37036]: I0312 14:51:08.032990 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g" (OuterVolumeSpecName: "kube-api-access-94l5g") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "kube-api-access-94l5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:08.051855 master-0 kubenswrapper[37036]: I0312 14:51:08.051525 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 14:51:08.056088 master-0 kubenswrapper[37036]: E0312 14:51:08.052415 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="dnsmasq-dns" Mar 12 14:51:08.056088 master-0 kubenswrapper[37036]: I0312 14:51:08.052440 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="dnsmasq-dns" Mar 12 14:51:08.056088 master-0 kubenswrapper[37036]: E0312 14:51:08.052461 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="init" Mar 12 14:51:08.056088 master-0 kubenswrapper[37036]: I0312 14:51:08.052470 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="init" Mar 12 14:51:08.056088 master-0 kubenswrapper[37036]: I0312 14:51:08.052717 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" containerName="dnsmasq-dns" Mar 12 14:51:08.072695 master-0 kubenswrapper[37036]: I0312 14:51:08.070121 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Mar 12 14:51:08.076511 master-0 kubenswrapper[37036]: I0312 14:51:08.073418 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 12 14:51:08.076511 master-0 kubenswrapper[37036]: I0312 14:51:08.073637 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 12 14:51:08.076511 master-0 kubenswrapper[37036]: I0312 14:51:08.075126 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:08.078121 master-0 kubenswrapper[37036]: I0312 14:51:08.078055 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.080317 master-0 kubenswrapper[37036]: I0312 14:51:08.079751 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-05598-backup-config-data" Mar 12 14:51:08.088564 master-0 kubenswrapper[37036]: I0312 14:51:08.088444 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94l5g\" (UniqueName: \"kubernetes.io/projected/00cfce92-4961-4a84-a59e-b1b979f29a35-kube-api-access-94l5g\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189818 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-scripts\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189877 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data\") pod \"cinder-05598-backup-0\" (UID: 
\"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189935 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-combined-ca-bundle\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189958 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-dev\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189977 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.189998 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c0524b9-cbf3-40e3-9424-98b634ba1b10-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190023 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-sys\") pod \"cinder-05598-backup-0\" (UID: 
\"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190062 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-brick\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190102 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjm4\" (UniqueName: \"kubernetes.io/projected/8c0524b9-cbf3-40e3-9424-98b634ba1b10-kube-api-access-5rjm4\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190127 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190165 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef3bb033-9476-44e0-b183-a02b7cec1cf4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a8ccea30-1ca1-4b4a-b263-8970cf916af6\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190207 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190255 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190300 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-run\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190330 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190353 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190393 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2vc\" 
(UniqueName: \"kubernetes.io/projected/59d7a356-c194-4ab2-9291-c1116ecc4bde-kube-api-access-tc2vc\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190415 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-scripts\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190437 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190476 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190504 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190521 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.193731 master-0 kubenswrapper[37036]: I0312 14:51:08.190541 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.219226 master-0 kubenswrapper[37036]: I0312 14:51:08.219148 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 14:51:08.247980 master-0 kubenswrapper[37036]: I0312 14:51:08.247857 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295048 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-run\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295148 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295196 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295240 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc2vc\" (UniqueName: \"kubernetes.io/projected/59d7a356-c194-4ab2-9291-c1116ecc4bde-kube-api-access-tc2vc\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295273 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-scripts\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295310 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295397 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295463 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295492 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295522 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295586 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-scripts\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295617 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295682 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-combined-ca-bundle\") pod 
\"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295719 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-dev\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295753 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295785 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c0524b9-cbf3-40e3-9424-98b634ba1b10-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295830 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-sys\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295875 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-brick\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " 
pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295928 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjm4\" (UniqueName: \"kubernetes.io/projected/8c0524b9-cbf3-40e3-9424-98b634ba1b10-kube-api-access-5rjm4\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.295960 master-0 kubenswrapper[37036]: I0312 14:51:08.295954 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.297266 master-0 kubenswrapper[37036]: I0312 14:51:08.296028 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef3bb033-9476-44e0-b183-a02b7cec1cf4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a8ccea30-1ca1-4b4a-b263-8970cf916af6\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.297266 master-0 kubenswrapper[37036]: I0312 14:51:08.296099 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.297266 master-0 kubenswrapper[37036]: I0312 14:51:08.296158 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 
14:51:08.297266 master-0 kubenswrapper[37036]: I0312 14:51:08.297242 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-dev\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.297441 master-0 kubenswrapper[37036]: I0312 14:51:08.297391 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.297576 master-0 kubenswrapper[37036]: I0312 14:51:08.297552 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-machine-id\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.297765 master-0 kubenswrapper[37036]: I0312 14:51:08.297743 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-lib-cinder\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.297840 master-0 kubenswrapper[37036]: I0312 14:51:08.297822 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-var-locks-brick\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.298032 37036 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-nvme\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.298136 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-sys\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.298325 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-etc-iscsi\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.298512 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-lib-modules\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.299124 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59d7a356-c194-4ab2-9291-c1116ecc4bde-run\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.304067 master-0 kubenswrapper[37036]: I0312 14:51:08.301777 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-merged\") pod 
\"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.307363 master-0 kubenswrapper[37036]: I0312 14:51:08.307327 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:51:08.307520 master-0 kubenswrapper[37036]: I0312 14:51:08.307375 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef3bb033-9476-44e0-b183-a02b7cec1cf4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a8ccea30-1ca1-4b4a-b263-8970cf916af6\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fb956a260f686053a52a3068f56475f2ac6429e63d672fba964e3601e1968edb/globalmount\"" pod="openstack/ironic-conductor-0" Mar 12 14:51:08.323624 master-0 kubenswrapper[37036]: I0312 14:51:08.323317 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.323624 master-0 kubenswrapper[37036]: I0312 14:51:08.323417 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.330049 master-0 kubenswrapper[37036]: I0312 14:51:08.328633 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" 
Mar 12 14:51:08.341936 master-0 kubenswrapper[37036]: I0312 14:51:08.341442 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-scripts\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.341936 master-0 kubenswrapper[37036]: I0312 14:51:08.341548 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-config-data-custom\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.341936 master-0 kubenswrapper[37036]: I0312 14:51:08.341630 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d7a356-c194-4ab2-9291-c1116ecc4bde-combined-ca-bundle\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.342253 master-0 kubenswrapper[37036]: I0312 14:51:08.341987 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-scripts\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.342431 master-0 kubenswrapper[37036]: I0312 14:51:08.342395 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8c0524b9-cbf3-40e3-9424-98b634ba1b10-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.344330 master-0 kubenswrapper[37036]: I0312 14:51:08.344291 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5rjm4\" (UniqueName: \"kubernetes.io/projected/8c0524b9-cbf3-40e3-9424-98b634ba1b10-kube-api-access-5rjm4\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.346514 master-0 kubenswrapper[37036]: I0312 14:51:08.346118 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc2vc\" (UniqueName: \"kubernetes.io/projected/59d7a356-c194-4ab2-9291-c1116ecc4bde-kube-api-access-tc2vc\") pod \"cinder-05598-backup-0\" (UID: \"59d7a356-c194-4ab2-9291-c1116ecc4bde\") " pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.346977 master-0 kubenswrapper[37036]: I0312 14:51:08.346864 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0524b9-cbf3-40e3-9424-98b634ba1b10-config-data\") pod \"ironic-conductor-0\" (UID: \"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:08.388072 master-0 kubenswrapper[37036]: I0312 14:51:08.388027 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-05598-backup-0" Mar 12 14:51:08.654151 master-0 kubenswrapper[37036]: I0312 14:51:08.654062 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" event={"ID":"00cfce92-4961-4a84-a59e-b1b979f29a35","Type":"ContainerDied","Data":"72548b0e888eb9b502240039b5beb76972e493ea0aa39524a2e19b00e6f160af"} Mar 12 14:51:08.654151 master-0 kubenswrapper[37036]: I0312 14:51:08.654129 37036 scope.go:117] "RemoveContainer" containerID="e012b91d96d153c570bd70db1b78442cffd6781ba2df9b6f987377ce7dbc2fd8" Mar 12 14:51:08.654411 master-0 kubenswrapper[37036]: I0312 14:51:08.654246 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95846d9c5-hjsmg" Mar 12 14:51:08.674938 master-0 kubenswrapper[37036]: I0312 14:51:08.671992 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-w9frr" event={"ID":"a271d093-100f-4c56-a201-16eb10358184","Type":"ContainerStarted","Data":"698138c12fbb175685e44cfc402ba1706f53defef1f0d9d0661d78e2a6a067c1"} Mar 12 14:51:08.785524 master-0 kubenswrapper[37036]: I0312 14:51:08.785436 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-w9frr" podStartSLOduration=3.785409176 podStartE2EDuration="3.785409176s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:08.726829606 +0000 UTC m=+927.734570543" watchObservedRunningTime="2026-03-12 14:51:08.785409176 +0000 UTC m=+927.793150113" Mar 12 14:51:09.009524 master-0 kubenswrapper[37036]: I0312 14:51:09.006766 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:09.033672 master-0 kubenswrapper[37036]: I0312 14:51:09.033564 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:51:09.048209 master-0 kubenswrapper[37036]: I0312 14:51:09.043932 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-5685659465-xhxkv"] Mar 12 14:51:09.069733 master-0 kubenswrapper[37036]: I0312 14:51:09.069647 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:09.247543 master-0 kubenswrapper[37036]: W0312 14:51:09.247193 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d7251f_7c8b_4119_af0a_368d13352fc2.slice/crio-ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359 WatchSource:0}: Error finding container ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359: Status 404 returned error can't find the container with id ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359 Mar 12 14:51:09.327353 master-0 kubenswrapper[37036]: I0312 14:51:09.321080 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:09.327353 master-0 kubenswrapper[37036]: I0312 14:51:09.326656 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config" (OuterVolumeSpecName: "config") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:09.328030 master-0 kubenswrapper[37036]: I0312 14:51:09.328001 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fda052-1aaa-41a2-80a1-0917c2494c02" path="/var/lib/kubelet/pods/f3fda052-1aaa-41a2-80a1-0917c2494c02/volumes" Mar 12 14:51:09.342064 master-0 kubenswrapper[37036]: I0312 14:51:09.341521 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:09.395055 master-0 kubenswrapper[37036]: I0312 14:51:09.394995 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:09.395055 master-0 kubenswrapper[37036]: I0312 14:51:09.395047 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:09.395055 master-0 kubenswrapper[37036]: I0312 14:51:09.395057 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:09.600251 master-0 kubenswrapper[37036]: I0312 14:51:09.600041 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00cfce92-4961-4a84-a59e-b1b979f29a35" (UID: "00cfce92-4961-4a84-a59e-b1b979f29a35"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:09.614374 master-0 kubenswrapper[37036]: I0312 14:51:09.614310 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00cfce92-4961-4a84-a59e-b1b979f29a35-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:09.754836 master-0 kubenswrapper[37036]: I0312 14:51:09.752994 37036 generic.go:334] "Generic (PLEG): container finished" podID="a271d093-100f-4c56-a201-16eb10358184" containerID="698138c12fbb175685e44cfc402ba1706f53defef1f0d9d0661d78e2a6a067c1" exitCode=0 Mar 12 14:51:09.795721 master-0 kubenswrapper[37036]: I0312 14:51:09.795529 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" event={"ID":"97d7251f-7c8b-4119-af0a-368d13352fc2","Type":"ContainerStarted","Data":"ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359"} Mar 12 14:51:09.795721 master-0 kubenswrapper[37036]: I0312 14:51:09.795590 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"76d48ea9-e7d6-4acd-b340-957c34aceb04","Type":"ContainerStarted","Data":"dff9ee5f25f8c2c52d17e7157375eb314dce6162a184df48f66955578a700163"} Mar 12 14:51:09.795721 master-0 kubenswrapper[37036]: I0312 14:51:09.795608 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-w9frr" event={"ID":"a271d093-100f-4c56-a201-16eb10358184","Type":"ContainerDied","Data":"698138c12fbb175685e44cfc402ba1706f53defef1f0d9d0661d78e2a6a067c1"} Mar 12 14:51:09.795721 master-0 kubenswrapper[37036]: I0312 14:51:09.795630 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-a166-account-create-update-4lb7x"] Mar 12 14:51:09.795721 master-0 kubenswrapper[37036]: I0312 14:51:09.795646 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-587fdb965c-q72qp"] Mar 12 14:51:09.795721 
master-0 kubenswrapper[37036]: I0312 14:51:09.795660 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-05598-backup-0"] Mar 12 14:51:09.801312 master-0 kubenswrapper[37036]: I0312 14:51:09.801158 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-volume-lvm-iscsi-0" event={"ID":"d700a6d3-cb4f-4971-8b80-30eaab119193","Type":"ContainerStarted","Data":"df9e9a7301a65e7955429fa2c741e714607308f18bf946319634fad5a4f20e94"} Mar 12 14:51:09.832275 master-0 kubenswrapper[37036]: I0312 14:51:09.831811 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerStarted","Data":"37b5a12e1d54f26b0d2b74e8045d8de6882a95ef706cbe7016e5fa290391ac49"} Mar 12 14:51:09.832644 master-0 kubenswrapper[37036]: I0312 14:51:09.832572 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-volume-lvm-iscsi-0" podStartSLOduration=5.83255162 podStartE2EDuration="5.83255162s" podCreationTimestamp="2026-03-12 14:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:09.832251435 +0000 UTC m=+928.839992372" watchObservedRunningTime="2026-03-12 14:51:09.83255162 +0000 UTC m=+928.840292557" Mar 12 14:51:09.834448 master-0 kubenswrapper[37036]: I0312 14:51:09.834396 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847764989-d9gwb" event={"ID":"3ce6481f-851c-4ead-a7c8-5de1d781cef9","Type":"ContainerStarted","Data":"df8917773066c6768a329bbd755868e50bcd7f9bcd206c2729fda8c145e9f1b6"} Mar 12 14:51:09.893328 master-0 kubenswrapper[37036]: I0312 14:51:09.893275 37036 scope.go:117] "RemoveContainer" containerID="ca8b737ce527a754ad4e301d625c526e66031c4a6b5979a7660b0fca39d46665" Mar 12 14:51:09.993093 master-0 kubenswrapper[37036]: I0312 14:51:09.992938 
37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 14:51:10.023516 master-0 kubenswrapper[37036]: I0312 14:51:10.023445 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95846d9c5-hjsmg"] Mar 12 14:51:10.260035 master-0 kubenswrapper[37036]: E0312 14:51:10.259892 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cfce92_4961_4a84_a59e_b1b979f29a35.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cfce92_4961_4a84_a59e_b1b979f29a35.slice/crio-72548b0e888eb9b502240039b5beb76972e493ea0aa39524a2e19b00e6f160af\": RecentStats: unable to find data in memory cache]" Mar 12 14:51:10.436086 master-0 kubenswrapper[37036]: I0312 14:51:10.436021 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5df675497f-chr8j"] Mar 12 14:51:10.456357 master-0 kubenswrapper[37036]: I0312 14:51:10.442367 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.456956 master-0 kubenswrapper[37036]: I0312 14:51:10.456742 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Mar 12 14:51:10.490736 master-0 kubenswrapper[37036]: I0312 14:51:10.459091 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Mar 12 14:51:10.522430 master-0 kubenswrapper[37036]: I0312 14:51:10.522270 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5df675497f-chr8j"] Mar 12 14:51:10.542478 master-0 kubenswrapper[37036]: I0312 14:51:10.542392 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-internal-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542478 master-0 kubenswrapper[37036]: I0312 14:51:10.542480 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbhz\" (UniqueName: \"kubernetes.io/projected/09d8068a-62a2-4363-9735-46c62d79015e-kube-api-access-cnbhz\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542740 master-0 kubenswrapper[37036]: I0312 14:51:10.542508 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/09d8068a-62a2-4363-9735-46c62d79015e-etc-podinfo\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542740 master-0 kubenswrapper[37036]: I0312 14:51:10.542591 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542740 master-0 kubenswrapper[37036]: I0312 14:51:10.542669 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data-custom\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542874 master-0 kubenswrapper[37036]: I0312 14:51:10.542782 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-public-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542874 master-0 kubenswrapper[37036]: I0312 14:51:10.542838 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-scripts\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542971 master-0 kubenswrapper[37036]: I0312 14:51:10.542883 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-combined-ca-bundle\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542971 master-0 kubenswrapper[37036]: I0312 
14:51:10.542932 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-logs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.542971 master-0 kubenswrapper[37036]: I0312 14:51:10.542957 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-config-data-merged\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646463 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-public-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646571 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-scripts\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646624 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-combined-ca-bundle\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646650 
37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-logs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646679 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-config-data-merged\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646745 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-internal-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646771 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnbhz\" (UniqueName: \"kubernetes.io/projected/09d8068a-62a2-4363-9735-46c62d79015e-kube-api-access-cnbhz\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646794 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/09d8068a-62a2-4363-9735-46c62d79015e-etc-podinfo\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646857 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.646947 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data-custom\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.652846 master-0 kubenswrapper[37036]: I0312 14:51:10.648042 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-config-data-merged\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.655386 master-0 kubenswrapper[37036]: I0312 14:51:10.655336 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d8068a-62a2-4363-9735-46c62d79015e-logs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.682641 master-0 kubenswrapper[37036]: I0312 14:51:10.682577 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data-custom\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.689743 master-0 kubenswrapper[37036]: I0312 14:51:10.687441 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-public-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.692436 master-0 kubenswrapper[37036]: I0312 14:51:10.692358 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-scripts\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.694935 master-0 kubenswrapper[37036]: I0312 14:51:10.693816 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-internal-tls-certs\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.700272 master-0 kubenswrapper[37036]: I0312 14:51:10.700176 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/09d8068a-62a2-4363-9735-46c62d79015e-etc-podinfo\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.701715 master-0 kubenswrapper[37036]: I0312 14:51:10.701258 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-config-data\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.703793 master-0 kubenswrapper[37036]: I0312 14:51:10.703747 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09d8068a-62a2-4363-9735-46c62d79015e-combined-ca-bundle\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.708884 master-0 kubenswrapper[37036]: I0312 14:51:10.708814 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnbhz\" (UniqueName: \"kubernetes.io/projected/09d8068a-62a2-4363-9735-46c62d79015e-kube-api-access-cnbhz\") pod \"ironic-5df675497f-chr8j\" (UID: \"09d8068a-62a2-4363-9735-46c62d79015e\") " pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.748659 master-0 kubenswrapper[37036]: I0312 14:51:10.745791 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:51:10.806843 master-0 kubenswrapper[37036]: I0312 14:51:10.806464 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5df675497f-chr8j" Mar 12 14:51:10.892969 master-0 kubenswrapper[37036]: I0312 14:51:10.892851 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"a3645372bfa200ad2a463de128192d5f9784ae804a7c2ba4005a3814b2564bfb"} Mar 12 14:51:10.906409 master-0 kubenswrapper[37036]: I0312 14:51:10.906322 37036 generic.go:334] "Generic (PLEG): container finished" podID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerID="29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247" exitCode=0 Mar 12 14:51:10.907101 master-0 kubenswrapper[37036]: I0312 14:51:10.907064 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847764989-d9gwb" event={"ID":"3ce6481f-851c-4ead-a7c8-5de1d781cef9","Type":"ContainerDied","Data":"29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247"} Mar 12 14:51:10.916733 master-0 kubenswrapper[37036]: I0312 14:51:10.916660 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" event={"ID":"97d7251f-7c8b-4119-af0a-368d13352fc2","Type":"ContainerStarted","Data":"966a7e4d99fc6a7bbac52cc57bc2b2c1f002822d08294ceebd81ca13d2d164a9"} Mar 12 14:51:10.931118 master-0 kubenswrapper[37036]: I0312 14:51:10.931046 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"59d7a356-c194-4ab2-9291-c1116ecc4bde","Type":"ContainerStarted","Data":"12c30a1a586ff70a8d8cab087a22f746313077a8700f219df5498ea16286fca1"} Mar 12 14:51:11.001856 master-0 kubenswrapper[37036]: I0312 14:51:11.001745 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" podStartSLOduration=6.001686281 podStartE2EDuration="6.001686281s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:10.996921168 +0000 UTC m=+930.004662115" watchObservedRunningTime="2026-03-12 14:51:11.001686281 +0000 UTC m=+930.009427238" Mar 12 14:51:11.129170 master-0 kubenswrapper[37036]: I0312 14:51:11.122885 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:51:11.287280 master-0 kubenswrapper[37036]: I0312 14:51:11.287232 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cfce92-4961-4a84-a59e-b1b979f29a35" path="/var/lib/kubelet/pods/00cfce92-4961-4a84-a59e-b1b979f29a35/volumes" Mar 12 14:51:11.389819 master-0 kubenswrapper[37036]: I0312 14:51:11.381018 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef3bb033-9476-44e0-b183-a02b7cec1cf4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a8ccea30-1ca1-4b4a-b263-8970cf916af6\") pod \"ironic-conductor-0\" (UID: 
\"8c0524b9-cbf3-40e3-9424-98b634ba1b10\") " pod="openstack/ironic-conductor-0" Mar 12 14:51:11.435115 master-0 kubenswrapper[37036]: I0312 14:51:11.434565 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 12 14:51:11.701168 master-0 kubenswrapper[37036]: I0312 14:51:11.701092 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-867bb94d6d-fmw6x"] Mar 12 14:51:11.705034 master-0 kubenswrapper[37036]: I0312 14:51:11.703562 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.758175 master-0 kubenswrapper[37036]: I0312 14:51:11.758076 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-867bb94d6d-fmw6x"] Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840614 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-config-data\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840694 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-internal-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840819 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bkq\" (UniqueName: \"kubernetes.io/projected/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-kube-api-access-56bkq\") pod \"placement-867bb94d6d-fmw6x\" (UID: 
\"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840866 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-logs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840919 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-scripts\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.875101 master-0 kubenswrapper[37036]: I0312 14:51:11.840964 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-public-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.899936 master-0 kubenswrapper[37036]: I0312 14:51:11.884808 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5df675497f-chr8j"] Mar 12 14:51:11.899936 master-0 kubenswrapper[37036]: I0312 14:51:11.897609 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-combined-ca-bundle\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:11.983272 master-0 kubenswrapper[37036]: I0312 14:51:11.980078 37036 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999330 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bkq\" (UniqueName: \"kubernetes.io/projected/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-kube-api-access-56bkq\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999388 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-logs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999423 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-scripts\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999469 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-public-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999532 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-combined-ca-bundle\") pod \"placement-867bb94d6d-fmw6x\" 
(UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999587 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-config-data\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:11.999607 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-internal-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.006047 master-0 kubenswrapper[37036]: I0312 14:51:12.004090 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-logs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.014453 master-0 kubenswrapper[37036]: I0312 14:51:12.008494 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-public-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.014453 master-0 kubenswrapper[37036]: I0312 14:51:12.009198 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-internal-tls-certs\") pod \"placement-867bb94d6d-fmw6x\" (UID: 
\"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.039145 master-0 kubenswrapper[37036]: I0312 14:51:12.024028 37036 generic.go:334] "Generic (PLEG): container finished" podID="97d7251f-7c8b-4119-af0a-368d13352fc2" containerID="966a7e4d99fc6a7bbac52cc57bc2b2c1f002822d08294ceebd81ca13d2d164a9" exitCode=0 Mar 12 14:51:12.039145 master-0 kubenswrapper[37036]: I0312 14:51:12.024131 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" event={"ID":"97d7251f-7c8b-4119-af0a-368d13352fc2","Type":"ContainerDied","Data":"966a7e4d99fc6a7bbac52cc57bc2b2c1f002822d08294ceebd81ca13d2d164a9"} Mar 12 14:51:12.071309 master-0 kubenswrapper[37036]: I0312 14:51:12.070987 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"59d7a356-c194-4ab2-9291-c1116ecc4bde","Type":"ContainerStarted","Data":"487dd0211e8c49805545236c7371ac3e942173479df7d9cb7a8f57d4cbf8800d"} Mar 12 14:51:12.077060 master-0 kubenswrapper[37036]: I0312 14:51:12.076908 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-w9frr" event={"ID":"a271d093-100f-4c56-a201-16eb10358184","Type":"ContainerDied","Data":"902b5e49e8379eebf353f9ba026577f131f269f2cd660aaee8781cb8656b738b"} Mar 12 14:51:12.077060 master-0 kubenswrapper[37036]: I0312 14:51:12.076963 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902b5e49e8379eebf353f9ba026577f131f269f2cd660aaee8781cb8656b738b" Mar 12 14:51:12.077060 master-0 kubenswrapper[37036]: I0312 14:51:12.077018 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-w9frr" Mar 12 14:51:12.082183 master-0 kubenswrapper[37036]: I0312 14:51:12.079866 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-scripts\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.088843 master-0 kubenswrapper[37036]: I0312 14:51:12.082757 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-combined-ca-bundle\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.088843 master-0 kubenswrapper[37036]: I0312 14:51:12.088149 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df675497f-chr8j" event={"ID":"09d8068a-62a2-4363-9735-46c62d79015e","Type":"ContainerStarted","Data":"d584c38fd65ed2fd22bce89295429026de2a22331032e6b7dfcd1d69d4d2e883"} Mar 12 14:51:12.112145 master-0 kubenswrapper[37036]: I0312 14:51:12.088283 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bkq\" (UniqueName: \"kubernetes.io/projected/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-kube-api-access-56bkq\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.112145 master-0 kubenswrapper[37036]: I0312 14:51:12.099641 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c163c0a9-d63a-491b-a3c7-4a97bede9f2f-config-data\") pod \"placement-867bb94d6d-fmw6x\" (UID: \"c163c0a9-d63a-491b-a3c7-4a97bede9f2f\") " pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.112145 master-0 
kubenswrapper[37036]: I0312 14:51:12.101538 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts\") pod \"a271d093-100f-4c56-a201-16eb10358184\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " Mar 12 14:51:12.112145 master-0 kubenswrapper[37036]: I0312 14:51:12.110851 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgkp5\" (UniqueName: \"kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5\") pod \"a271d093-100f-4c56-a201-16eb10358184\" (UID: \"a271d093-100f-4c56-a201-16eb10358184\") " Mar 12 14:51:12.112692 master-0 kubenswrapper[37036]: I0312 14:51:12.102074 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a271d093-100f-4c56-a201-16eb10358184" (UID: "a271d093-100f-4c56-a201-16eb10358184"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:12.140480 master-0 kubenswrapper[37036]: I0312 14:51:12.140385 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5" (OuterVolumeSpecName: "kube-api-access-qgkp5") pod "a271d093-100f-4c56-a201-16eb10358184" (UID: "a271d093-100f-4c56-a201-16eb10358184"). InnerVolumeSpecName "kube-api-access-qgkp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:12.213730 master-0 kubenswrapper[37036]: I0312 14:51:12.213527 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgkp5\" (UniqueName: \"kubernetes.io/projected/a271d093-100f-4c56-a201-16eb10358184-kube-api-access-qgkp5\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:12.213730 master-0 kubenswrapper[37036]: I0312 14:51:12.213574 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a271d093-100f-4c56-a201-16eb10358184-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:12.294466 master-0 kubenswrapper[37036]: I0312 14:51:12.293791 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-867bb94d6d-fmw6x" Mar 12 14:51:12.502564 master-0 kubenswrapper[37036]: I0312 14:51:12.502509 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-05598-api-0" Mar 12 14:51:12.646226 master-0 kubenswrapper[37036]: I0312 14:51:12.641168 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 14:51:13.006619 master-0 kubenswrapper[37036]: I0312 14:51:13.002782 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-867bb94d6d-fmw6x"] Mar 12 14:51:13.112339 master-0 kubenswrapper[37036]: I0312 14:51:13.112224 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"b506948e6ebca5f4fc876d7a3b6dd7abb9f217e6c833db48a42445ea08e7683f"} Mar 12 14:51:13.117783 master-0 kubenswrapper[37036]: I0312 14:51:13.117740 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847764989-d9gwb" 
event={"ID":"3ce6481f-851c-4ead-a7c8-5de1d781cef9","Type":"ContainerStarted","Data":"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d"} Mar 12 14:51:13.117924 master-0 kubenswrapper[37036]: I0312 14:51:13.117854 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:51:13.128550 master-0 kubenswrapper[37036]: I0312 14:51:13.128496 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-backup-0" event={"ID":"59d7a356-c194-4ab2-9291-c1116ecc4bde","Type":"ContainerStarted","Data":"6aeca1f8b131d58dd69c6805fc02325683f5be74cb56099911b23a863621f0f3"} Mar 12 14:51:13.136711 master-0 kubenswrapper[37036]: I0312 14:51:13.135961 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-05598-scheduler-0" event={"ID":"76d48ea9-e7d6-4acd-b340-957c34aceb04","Type":"ContainerStarted","Data":"1659fcf76ca6ab55bf13d17112ded703a0b243ff6271ec8a0d10adfbf151394d"} Mar 12 14:51:13.155774 master-0 kubenswrapper[37036]: I0312 14:51:13.155702 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7847764989-d9gwb" podStartSLOduration=8.155683521 podStartE2EDuration="8.155683521s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:13.147475988 +0000 UTC m=+932.155216925" watchObservedRunningTime="2026-03-12 14:51:13.155683521 +0000 UTC m=+932.163424458" Mar 12 14:51:13.185878 master-0 kubenswrapper[37036]: I0312 14:51:13.185799 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-backup-0" podStartSLOduration=6.185777905 podStartE2EDuration="6.185777905s" podCreationTimestamp="2026-03-12 14:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 14:51:13.179205601 +0000 UTC m=+932.186946538" watchObservedRunningTime="2026-03-12 14:51:13.185777905 +0000 UTC m=+932.193518842"
Mar 12 14:51:13.233457 master-0 kubenswrapper[37036]: I0312 14:51:13.233383 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-05598-scheduler-0" podStartSLOduration=8.233359525000001 podStartE2EDuration="8.233359525s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:13.211588376 +0000 UTC m=+932.219329313" watchObservedRunningTime="2026-03-12 14:51:13.233359525 +0000 UTC m=+932.241100462"
Mar 12 14:51:13.389292 master-0 kubenswrapper[37036]: I0312 14:51:13.389230 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-backup-0"
Mar 12 14:51:14.151997 master-0 kubenswrapper[37036]: I0312 14:51:14.149958 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"cf46e17b4a3885c0faaf42badb59c19852647617911132941437d8434a4db38f"}
Mar 12 14:51:14.647271 master-0 kubenswrapper[37036]: I0312 14:51:14.647220 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x"
Mar 12 14:51:14.651822 master-0 kubenswrapper[37036]: I0312 14:51:14.651678 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:14.822279 master-0 kubenswrapper[37036]: I0312 14:51:14.821437 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hghvs\" (UniqueName: \"kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs\") pod \"97d7251f-7c8b-4119-af0a-368d13352fc2\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") "
Mar 12 14:51:14.822279 master-0 kubenswrapper[37036]: I0312 14:51:14.821708 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts\") pod \"97d7251f-7c8b-4119-af0a-368d13352fc2\" (UID: \"97d7251f-7c8b-4119-af0a-368d13352fc2\") "
Mar 12 14:51:14.826808 master-0 kubenswrapper[37036]: I0312 14:51:14.826760 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97d7251f-7c8b-4119-af0a-368d13352fc2" (UID: "97d7251f-7c8b-4119-af0a-368d13352fc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:14.831511 master-0 kubenswrapper[37036]: I0312 14:51:14.831473 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs" (OuterVolumeSpecName: "kube-api-access-hghvs") pod "97d7251f-7c8b-4119-af0a-368d13352fc2" (UID: "97d7251f-7c8b-4119-af0a-368d13352fc2"). InnerVolumeSpecName "kube-api-access-hghvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:14.926759 master-0 kubenswrapper[37036]: I0312 14:51:14.926698 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hghvs\" (UniqueName: \"kubernetes.io/projected/97d7251f-7c8b-4119-af0a-368d13352fc2-kube-api-access-hghvs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:14.926759 master-0 kubenswrapper[37036]: I0312 14:51:14.926757 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97d7251f-7c8b-4119-af0a-368d13352fc2-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:14.977579 master-0 kubenswrapper[37036]: I0312 14:51:14.977528 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-volume-lvm-iscsi-0"
Mar 12 14:51:15.183311 master-0 kubenswrapper[37036]: I0312 14:51:15.183071 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x"
Mar 12 14:51:15.183311 master-0 kubenswrapper[37036]: I0312 14:51:15.183088 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-a166-account-create-update-4lb7x" event={"ID":"97d7251f-7c8b-4119-af0a-368d13352fc2","Type":"ContainerDied","Data":"ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359"}
Mar 12 14:51:15.183311 master-0 kubenswrapper[37036]: I0312 14:51:15.183146 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9c23e95ac7116f153d77e9fbf8ec1d013da9335dcf3a5980d26fed0cb6a359"
Mar 12 14:51:15.189399 master-0 kubenswrapper[37036]: I0312 14:51:15.189284 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-867bb94d6d-fmw6x" event={"ID":"c163c0a9-d63a-491b-a3c7-4a97bede9f2f","Type":"ContainerStarted","Data":"f55d68957e4720a91800c560e70aefca161f2ff2284e78c3f165d9895aecb5a3"}
Mar 12 14:51:15.724066 master-0 kubenswrapper[37036]: I0312 14:51:15.721633 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:16.109331 master-0 kubenswrapper[37036]: I0312 14:51:16.108797 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-scheduler-0"
Mar 12 14:51:16.225728 master-0 kubenswrapper[37036]: I0312 14:51:16.222812 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df675497f-chr8j" event={"ID":"09d8068a-62a2-4363-9735-46c62d79015e","Type":"ContainerStarted","Data":"adf98d0aeebdafad79029becc382a6fa80e5acb39739553eab2886983d48415a"}
Mar 12 14:51:16.228942 master-0 kubenswrapper[37036]: I0312 14:51:16.226499 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerStarted","Data":"fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe"}
Mar 12 14:51:16.228942 master-0 kubenswrapper[37036]: I0312 14:51:16.227327 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5685659465-xhxkv"
Mar 12 14:51:16.238478 master-0 kubenswrapper[37036]: I0312 14:51:16.238261 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-867bb94d6d-fmw6x" event={"ID":"c163c0a9-d63a-491b-a3c7-4a97bede9f2f","Type":"ContainerStarted","Data":"747175eeda46a17d3a9336b08505db9904b7cd6603778f6dd05de0266e2e302a"}
Mar 12 14:51:16.253922 master-0 kubenswrapper[37036]: I0312 14:51:16.250238 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"}
Mar 12 14:51:16.364816 master-0 kubenswrapper[37036]: I0312 14:51:16.359731 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podStartSLOduration=4.944484953 podStartE2EDuration="11.359708815s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:09.125953023 +0000 UTC m=+928.133693960" lastFinishedPulling="2026-03-12 14:51:15.541176885 +0000 UTC m=+934.548917822" observedRunningTime="2026-03-12 14:51:16.343176236 +0000 UTC m=+935.350917173" watchObservedRunningTime="2026-03-12 14:51:16.359708815 +0000 UTC m=+935.367449752"
Mar 12 14:51:17.265178 master-0 kubenswrapper[37036]: I0312 14:51:17.264873 37036 generic.go:334] "Generic (PLEG): container finished" podID="09d8068a-62a2-4363-9735-46c62d79015e" containerID="adf98d0aeebdafad79029becc382a6fa80e5acb39739553eab2886983d48415a" exitCode=0
Mar 12 14:51:17.265178 master-0 kubenswrapper[37036]: I0312 14:51:17.264961 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df675497f-chr8j" event={"ID":"09d8068a-62a2-4363-9735-46c62d79015e","Type":"ContainerDied","Data":"adf98d0aeebdafad79029becc382a6fa80e5acb39739553eab2886983d48415a"}
Mar 12 14:51:17.273424 master-0 kubenswrapper[37036]: I0312 14:51:17.271149 37036 generic.go:334] "Generic (PLEG): container finished" podID="8c0524b9-cbf3-40e3-9424-98b634ba1b10" containerID="cf46e17b4a3885c0faaf42badb59c19852647617911132941437d8434a4db38f" exitCode=0
Mar 12 14:51:17.273424 master-0 kubenswrapper[37036]: I0312 14:51:17.271241 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerDied","Data":"cf46e17b4a3885c0faaf42badb59c19852647617911132941437d8434a4db38f"}
Mar 12 14:51:17.284571 master-0 kubenswrapper[37036]: I0312 14:51:17.284500 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-867bb94d6d-fmw6x" event={"ID":"c163c0a9-d63a-491b-a3c7-4a97bede9f2f","Type":"ContainerStarted","Data":"501cb75c6ab99a5966ce61f7dcc6d49c43c5fc0e014efd30389de3754f6fd340"}
Mar 12 14:51:17.285518 master-0 kubenswrapper[37036]: I0312 14:51:17.285484 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-867bb94d6d-fmw6x"
Mar 12 14:51:17.285603 master-0 kubenswrapper[37036]: I0312 14:51:17.285531 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-867bb94d6d-fmw6x"
Mar 12 14:51:17.289230 master-0 kubenswrapper[37036]: I0312 14:51:17.289162 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerID="6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34" exitCode=1
Mar 12 14:51:17.291051 master-0 kubenswrapper[37036]: I0312 14:51:17.290962 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"}
Mar 12 14:51:17.373463 master-0 kubenswrapper[37036]: I0312 14:51:17.373344 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-867bb94d6d-fmw6x" podStartSLOduration=6.3732873340000005 podStartE2EDuration="6.373287334s" podCreationTimestamp="2026-03-12 14:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:17.360376719 +0000 UTC m=+936.368117656" watchObservedRunningTime="2026-03-12 14:51:17.373287334 +0000 UTC m=+936.381028271"
Mar 12 14:51:17.400289 master-0 kubenswrapper[37036]: I0312 14:51:17.397479 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7847764989-d9gwb"
Mar 12 14:51:17.673174 master-0 kubenswrapper[37036]: I0312 14:51:17.671836 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"]
Mar 12 14:51:17.676326 master-0 kubenswrapper[37036]: I0312 14:51:17.675997 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="dnsmasq-dns" containerID="cri-o://3e29737a7de27a41ff097699281dddf8a3c0641df751cc31550f9a2113f449e5" gracePeriod=10
Mar 12 14:51:18.291000 master-0 kubenswrapper[37036]: I0312 14:51:18.274436 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-679985d476-m2lh7"
Mar 12 14:51:18.325213 master-0 kubenswrapper[37036]: I0312 14:51:18.325166 37036 generic.go:334] "Generic (PLEG): container finished" podID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerID="3e29737a7de27a41ff097699281dddf8a3c0641df751cc31550f9a2113f449e5" exitCode=0
Mar 12 14:51:18.325547 master-0 kubenswrapper[37036]: I0312 14:51:18.325224 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" event={"ID":"4957b7fc-e353-40a6-b5c5-39b608bb366d","Type":"ContainerDied","Data":"3e29737a7de27a41ff097699281dddf8a3c0641df751cc31550f9a2113f449e5"}
Mar 12 14:51:18.352664 master-0 kubenswrapper[37036]: I0312 14:51:18.352593 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"}
Mar 12 14:51:18.404251 master-0 kubenswrapper[37036]: I0312 14:51:18.404191 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df675497f-chr8j" event={"ID":"09d8068a-62a2-4363-9735-46c62d79015e","Type":"ContainerStarted","Data":"43098ce8d8ac7c54b242cc1c1b009ea031b8c43a23c2a02c129652e3aa2b496f"}
Mar 12 14:51:18.404387 master-0 kubenswrapper[37036]: I0312 14:51:18.404272 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5df675497f-chr8j"
Mar 12 14:51:18.494206 master-0 kubenswrapper[37036]: I0312 14:51:18.494110 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59"
Mar 12 14:51:18.502852 master-0 kubenswrapper[37036]: I0312 14:51:18.502745 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5df675497f-chr8j" podStartSLOduration=4.860119013 podStartE2EDuration="8.502728163s" podCreationTimestamp="2026-03-12 14:51:10 +0000 UTC" firstStartedPulling="2026-03-12 14:51:11.900241595 +0000 UTC m=+930.907982532" lastFinishedPulling="2026-03-12 14:51:15.542850745 +0000 UTC m=+934.550591682" observedRunningTime="2026-03-12 14:51:18.466190106 +0000 UTC m=+937.473931043" watchObservedRunningTime="2026-03-12 14:51:18.502728163 +0000 UTC m=+937.510469090"
Mar 12 14:51:18.513948 master-0 kubenswrapper[37036]: I0312 14:51:18.513204 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.513948 master-0 kubenswrapper[37036]: I0312 14:51:18.513394 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.513948 master-0 kubenswrapper[37036]: I0312 14:51:18.513715 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.513948 master-0 kubenswrapper[37036]: I0312 14:51:18.513947 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.514360 master-0 kubenswrapper[37036]: I0312 14:51:18.513980 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.514360 master-0 kubenswrapper[37036]: I0312 14:51:18.514006 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb\") pod \"4957b7fc-e353-40a6-b5c5-39b608bb366d\" (UID: \"4957b7fc-e353-40a6-b5c5-39b608bb366d\") "
Mar 12 14:51:18.527781 master-0 kubenswrapper[37036]: I0312 14:51:18.527694 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp" (OuterVolumeSpecName: "kube-api-access-x54xp") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "kube-api-access-x54xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:18.618806 master-0 kubenswrapper[37036]: I0312 14:51:18.618000 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54xp\" (UniqueName: \"kubernetes.io/projected/4957b7fc-e353-40a6-b5c5-39b608bb366d-kube-api-access-x54xp\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.623736 master-0 kubenswrapper[37036]: I0312 14:51:18.622656 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:18.638554 master-0 kubenswrapper[37036]: I0312 14:51:18.636018 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:18.662622 master-0 kubenswrapper[37036]: I0312 14:51:18.662556 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config" (OuterVolumeSpecName: "config") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:18.674149 master-0 kubenswrapper[37036]: I0312 14:51:18.673500 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:18.692668 master-0 kubenswrapper[37036]: I0312 14:51:18.692586 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4957b7fc-e353-40a6-b5c5-39b608bb366d" (UID: "4957b7fc-e353-40a6-b5c5-39b608bb366d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:51:18.721628 master-0 kubenswrapper[37036]: I0312 14:51:18.721573 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.721628 master-0 kubenswrapper[37036]: I0312 14:51:18.721621 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.721628 master-0 kubenswrapper[37036]: I0312 14:51:18.721634 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.721628 master-0 kubenswrapper[37036]: I0312 14:51:18.721643 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.722006 master-0 kubenswrapper[37036]: I0312 14:51:18.721653 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4957b7fc-e353-40a6-b5c5-39b608bb366d-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:18.750988 master-0 kubenswrapper[37036]: I0312 14:51:18.748249 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-05598-backup-0"
Mar 12 14:51:19.417724 master-0 kubenswrapper[37036]: I0312 14:51:19.417602 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59" event={"ID":"4957b7fc-e353-40a6-b5c5-39b608bb366d","Type":"ContainerDied","Data":"4253f189b2496fef804d6e2264bfdfc278708c5336db677d0eee80545c7163e6"}
Mar 12 14:51:19.419041 master-0 kubenswrapper[37036]: I0312 14:51:19.418922 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c468f6c5c-sfk59"
Mar 12 14:51:19.422077 master-0 kubenswrapper[37036]: I0312 14:51:19.422038 37036 scope.go:117] "RemoveContainer" containerID="3e29737a7de27a41ff097699281dddf8a3c0641df751cc31550f9a2113f449e5"
Mar 12 14:51:19.425132 master-0 kubenswrapper[37036]: I0312 14:51:19.425097 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerID="bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449" exitCode=0
Mar 12 14:51:19.425271 master-0 kubenswrapper[37036]: I0312 14:51:19.425168 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"}
Mar 12 14:51:19.425839 master-0 kubenswrapper[37036]: I0312 14:51:19.425816 37036 scope.go:117] "RemoveContainer" containerID="6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"
Mar 12 14:51:19.434094 master-0 kubenswrapper[37036]: I0312 14:51:19.434033 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5df675497f-chr8j" event={"ID":"09d8068a-62a2-4363-9735-46c62d79015e","Type":"ContainerStarted","Data":"306adc88ad12891544338ab37a3e93a9b1e5f2d2b0f866808e680077b3406fe0"}
Mar 12 14:51:19.489476 master-0 kubenswrapper[37036]: I0312 14:51:19.489411 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"]
Mar 12 14:51:19.506199 master-0 kubenswrapper[37036]: I0312 14:51:19.506137 37036 scope.go:117] "RemoveContainer" containerID="2eaf55a3e6d8bc65c052162f70a6ff505057828559184eddf1ae00f63dba9316"
Mar 12 14:51:19.533168 master-0 kubenswrapper[37036]: I0312 14:51:19.533080 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c468f6c5c-sfk59"]
Mar 12 14:51:19.550527 master-0 kubenswrapper[37036]: I0312 14:51:19.550479 37036 scope.go:117] "RemoveContainer" containerID="6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"
Mar 12 14:51:19.551075 master-0 kubenswrapper[37036]: E0312 14:51:19.551028 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34\": container with ID starting with 6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34 not found: ID does not exist" containerID="6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"
Mar 12 14:51:19.551162 master-0 kubenswrapper[37036]: I0312 14:51:19.551066 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34"} err="failed to get container status \"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34\": rpc error: code = NotFound desc = could not find container \"6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34\": container with ID starting with 6ddf96b1fdba41e47ba8d373f11a06b68cc5242b427861645b84750a361dad34 not found: ID does not exist"
Mar 12 14:51:20.453385 master-0 kubenswrapper[37036]: I0312 14:51:20.451035 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"d25abaf1a0366ce158682d6d8cb39f30e262b0a2dad3fc1200bffeec01596c35"}
Mar 12 14:51:20.453385 master-0 kubenswrapper[37036]: I0312 14:51:20.451086 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"}
Mar 12 14:51:20.453385 master-0 kubenswrapper[37036]: I0312 14:51:20.451134 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:20.483994 master-0 kubenswrapper[37036]: I0312 14:51:20.483927 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-587fdb965c-q72qp" podStartSLOduration=9.670751073 podStartE2EDuration="15.48390744s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:09.728501097 +0000 UTC m=+928.736242034" lastFinishedPulling="2026-03-12 14:51:15.541657464 +0000 UTC m=+934.549398401" observedRunningTime="2026-03-12 14:51:20.477419137 +0000 UTC m=+939.485160074" watchObservedRunningTime="2026-03-12 14:51:20.48390744 +0000 UTC m=+939.491648377"
Mar 12 14:51:20.788688 master-0 kubenswrapper[37036]: I0312 14:51:20.788597 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-ht555"]
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: E0312 14:51:20.800205 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="init"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.800269 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="init"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: E0312 14:51:20.800458 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a271d093-100f-4c56-a201-16eb10358184" containerName="mariadb-database-create"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.800470 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="a271d093-100f-4c56-a201-16eb10358184" containerName="mariadb-database-create"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: E0312 14:51:20.800482 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d7251f-7c8b-4119-af0a-368d13352fc2" containerName="mariadb-account-create-update"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.800489 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d7251f-7c8b-4119-af0a-368d13352fc2" containerName="mariadb-account-create-update"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: E0312 14:51:20.800633 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="dnsmasq-dns"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.800642 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="dnsmasq-dns"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.801352 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="a271d093-100f-4c56-a201-16eb10358184" containerName="mariadb-database-create"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.801383 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" containerName="dnsmasq-dns"
Mar 12 14:51:20.803152 master-0 kubenswrapper[37036]: I0312 14:51:20.801412 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d7251f-7c8b-4119-af0a-368d13352fc2" containerName="mariadb-account-create-update"
Mar 12 14:51:20.811673 master-0 kubenswrapper[37036]: I0312 14:51:20.807193 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.811673 master-0 kubenswrapper[37036]: I0312 14:51:20.807524 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-ht555"]
Mar 12 14:51:20.819687 master-0 kubenswrapper[37036]: I0312 14:51:20.819628 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 12 14:51:20.820250 master-0 kubenswrapper[37036]: I0312 14:51:20.820226 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 12 14:51:20.918647 master-0 kubenswrapper[37036]: I0312 14:51:20.917943 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.918647 master-0 kubenswrapper[37036]: I0312 14:51:20.918295 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.919353 master-0 kubenswrapper[37036]: I0312 14:51:20.919157 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.919353 master-0 kubenswrapper[37036]: I0312 14:51:20.919234 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctl2\" (UniqueName: \"kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.919353 master-0 kubenswrapper[37036]: I0312 14:51:20.919314 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.919745 master-0 kubenswrapper[37036]: I0312 14:51:20.919709 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:20.919803 master-0 kubenswrapper[37036]: I0312 14:51:20.919772 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.021647 master-0 kubenswrapper[37036]: I0312 14:51:21.021587 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.022277 master-0 kubenswrapper[37036]: I0312 14:51:21.021674 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctl2\" (UniqueName: \"kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.022277 master-0 kubenswrapper[37036]: I0312 14:51:21.021736 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.022277 master-0 kubenswrapper[37036]: I0312 14:51:21.021938 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.022277 master-0 kubenswrapper[37036]: I0312 14:51:21.022012 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.022892 master-0 kubenswrapper[37036]: I0312 14:51:21.022875 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.025462 master-0 kubenswrapper[37036]: I0312 14:51:21.023776 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.025892 master-0 kubenswrapper[37036]: I0312 14:51:21.025852 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.026095 master-0 kubenswrapper[37036]: I0312 14:51:21.026064 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.026785 master-0 kubenswrapper[37036]: I0312 14:51:21.026600 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.026785 master-0 kubenswrapper[37036]: I0312 14:51:21.026746 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.034368 master-0 kubenswrapper[37036]: I0312 14:51:21.034310 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.041261 master-0 kubenswrapper[37036]: I0312 14:51:21.041163 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.050966 master-0 kubenswrapper[37036]: I0312 14:51:21.049993 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctl2\" (UniqueName: \"kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2\") pod \"ironic-inspector-db-sync-ht555\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.139429 master-0 kubenswrapper[37036]: I0312 14:51:21.139362 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:21.269312 master-0 kubenswrapper[37036]: I0312 14:51:21.269143 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4957b7fc-e353-40a6-b5c5-39b608bb366d" path="/var/lib/kubelet/pods/4957b7fc-e353-40a6-b5c5-39b608bb366d/volumes"
Mar 12 14:51:21.475785 master-0 kubenswrapper[37036]: I0312 14:51:21.475667 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerID="d25abaf1a0366ce158682d6d8cb39f30e262b0a2dad3fc1200bffeec01596c35" exitCode=1
Mar 12 14:51:21.475785 master-0 kubenswrapper[37036]: I0312 14:51:21.475734 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"d25abaf1a0366ce158682d6d8cb39f30e262b0a2dad3fc1200bffeec01596c35"}
Mar 12 14:51:21.476793 master-0 kubenswrapper[37036]: I0312 14:51:21.476762 37036 scope.go:117] "RemoveContainer" containerID="d25abaf1a0366ce158682d6d8cb39f30e262b0a2dad3fc1200bffeec01596c35"
Mar 12 14:51:21.965160 master-0 kubenswrapper[37036]: I0312 14:51:21.959677 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 12 14:51:21.965160 master-0 kubenswrapper[37036]: I0312 14:51:21.961587 37036 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient"
Mar 12 14:51:21.965160 master-0 kubenswrapper[37036]: I0312 14:51:21.964771 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 12 14:51:21.965459 master-0 kubenswrapper[37036]: I0312 14:51:21.965373 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 12 14:51:22.002352 master-0 kubenswrapper[37036]: I0312 14:51:22.000956 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 12 14:51:22.053768 master-0 kubenswrapper[37036]: I0312 14:51:22.052087 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ftqn\" (UniqueName: \"kubernetes.io/projected/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-kube-api-access-9ftqn\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.053768 master-0 kubenswrapper[37036]: I0312 14:51:22.052154 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.053768 master-0 kubenswrapper[37036]: I0312 14:51:22.052199 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.053768 master-0 kubenswrapper[37036]: I0312 14:51:22.052360 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.154299 master-0 kubenswrapper[37036]: I0312 14:51:22.154109 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.154689 master-0 kubenswrapper[37036]: I0312 14:51:22.154666 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ftqn\" (UniqueName: \"kubernetes.io/projected/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-kube-api-access-9ftqn\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.154760 master-0 kubenswrapper[37036]: I0312 14:51:22.154717 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.154760 master-0 kubenswrapper[37036]: I0312 14:51:22.154756 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.155777 master-0 kubenswrapper[37036]: I0312 14:51:22.155748 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.158688 master-0 kubenswrapper[37036]: I0312 14:51:22.158657 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.160752 master-0 kubenswrapper[37036]: I0312 14:51:22.160707 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.178273 master-0 kubenswrapper[37036]: I0312 14:51:22.178108 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ftqn\" (UniqueName: \"kubernetes.io/projected/c3083373-d0bb-4775-b8ea-1d34f46bc0b7-kube-api-access-9ftqn\") pod \"openstackclient\" (UID: \"c3083373-d0bb-4775-b8ea-1d34f46bc0b7\") " pod="openstack/openstackclient"
Mar 12 14:51:22.316000 master-0 kubenswrapper[37036]: I0312 14:51:22.315879 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 14:51:22.363882 master-0 kubenswrapper[37036]: E0312 14:51:22.362959 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.363882 master-0 kubenswrapper[37036]: E0312 14:51:22.363116 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.363882 master-0 kubenswrapper[37036]: E0312 14:51:22.363592 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.363882 master-0 kubenswrapper[37036]: E0312 14:51:22.363723 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.366518 master-0 kubenswrapper[37036]: E0312 14:51:22.363891 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc =
container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.366518 master-0 kubenswrapper[37036]: E0312 14:51:22.363943 37036 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podUID="ee3a29d2-bf14-4521-896e-b0169adefcb2" containerName="ironic-neutron-agent"
Mar 12 14:51:22.366518 master-0 kubenswrapper[37036]: E0312 14:51:22.364616 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" cmd=["/bin/true"]
Mar 12 14:51:22.366518 master-0 kubenswrapper[37036]: E0312 14:51:22.364653 37036 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podUID="ee3a29d2-bf14-4521-896e-b0169adefcb2" containerName="ironic-neutron-agent"
Mar 12 14:51:22.420301 master-0 kubenswrapper[37036]: I0312 14:51:22.420237 37036 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:23.519266 master-0 kubenswrapper[37036]: I0312 14:51:23.519208 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerStarted","Data":"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"}
Mar 12 14:51:23.523349 master-0 kubenswrapper[37036]: I0312 14:51:23.522631 37036 generic.go:334] "Generic (PLEG): container finished" podID="ee3a29d2-bf14-4521-896e-b0169adefcb2" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe" exitCode=1
Mar 12 14:51:23.523349 master-0 kubenswrapper[37036]: I0312 14:51:23.522668 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerDied","Data":"fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe"}
Mar 12 14:51:23.523628 master-0 kubenswrapper[37036]: I0312 14:51:23.523443 37036 scope.go:117] "RemoveContainer" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe"
Mar 12 14:51:23.795031 master-0 kubenswrapper[37036]: I0312 14:51:23.791340 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-ht555"]
Mar 12 14:51:23.828006 master-0 kubenswrapper[37036]: W0312 14:51:23.827919 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3083373_d0bb_4775_b8ea_1d34f46bc0b7.slice/crio-ee4638e2f626806754936223bd4c681ff66d75256003e90c8c6504929764a344 WatchSource:0}: Error finding container ee4638e2f626806754936223bd4c681ff66d75256003e90c8c6504929764a344: Status 404 returned error can't find the container with id ee4638e2f626806754936223bd4c681ff66d75256003e90c8c6504929764a344
Mar 12 14:51:23.848372 master-0 kubenswrapper[37036]: I0312 14:51:23.848297 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 12 14:51:24.540012 master-0 kubenswrapper[37036]: I0312 14:51:24.537756 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3083373-d0bb-4775-b8ea-1d34f46bc0b7","Type":"ContainerStarted","Data":"ee4638e2f626806754936223bd4c681ff66d75256003e90c8c6504929764a344"}
Mar 12 14:51:24.540012 master-0 kubenswrapper[37036]: I0312 14:51:24.539480 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-ht555" event={"ID":"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463","Type":"ContainerStarted","Data":"f35e822940d527de569ac142409472bf8d229e8af053f8e668cb1dbf5b61d42b"}
Mar 12 14:51:24.546723 master-0 kubenswrapper[37036]: I0312 14:51:24.541780 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerStarted","Data":"e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0"}
Mar 12 14:51:24.546723 master-0 kubenswrapper[37036]: I0312 14:51:24.543058 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5685659465-xhxkv"
Mar 12 14:51:24.550970 master-0 kubenswrapper[37036]: I0312 14:51:24.548035 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6" exitCode=1
Mar 12 14:51:24.550970 master-0 kubenswrapper[37036]: I0312 14:51:24.548093 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"}
Mar 12 14:51:24.550970 master-0 kubenswrapper[37036]: I0312 14:51:24.548138 37036 scope.go:117] "RemoveContainer" containerID="d25abaf1a0366ce158682d6d8cb39f30e262b0a2dad3fc1200bffeec01596c35"
Mar 12 14:51:24.550970 master-0 kubenswrapper[37036]: I0312 14:51:24.549036 37036 scope.go:117] "RemoveContainer" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:24.550970 master-0 kubenswrapper[37036]: E0312 14:51:24.549620 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-587fdb965c-q72qp_openstack(4f72582b-03ab-4662-bb3e-3683d598e72b)\"" pod="openstack/ironic-587fdb965c-q72qp" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b"
Mar 12 14:51:25.595928 master-0 kubenswrapper[37036]: I0312 14:51:25.590230 37036 scope.go:117] "RemoveContainer" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:25.595928 master-0 kubenswrapper[37036]: E0312 14:51:25.590497 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-587fdb965c-q72qp_openstack(4f72582b-03ab-4662-bb3e-3683d598e72b)\"" pod="openstack/ironic-587fdb965c-q72qp" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b"
Mar 12 14:51:25.704193 master-0 kubenswrapper[37036]: I0312 14:51:25.704137 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-865fc75fb8-6hmpx"
Mar 12 14:51:27.416576 master-0 kubenswrapper[37036]: I0312 14:51:27.416515 37036 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:27.417210 master-0 kubenswrapper[37036]: I0312 14:51:27.416586 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:27.417670 master-0 kubenswrapper[37036]: I0312 14:51:27.417644 37036 scope.go:117] "RemoveContainer" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:27.418028 master-0 kubenswrapper[37036]: E0312 14:51:27.417991 37036 pod_workers.go:1301]
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-587fdb965c-q72qp_openstack(4f72582b-03ab-4662-bb3e-3683d598e72b)\"" pod="openstack/ironic-587fdb965c-q72qp" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b"
Mar 12 14:51:27.430723 master-0 kubenswrapper[37036]: I0312 14:51:27.428318 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-5685659465-xhxkv"
Mar 12 14:51:27.941924 master-0 kubenswrapper[37036]: I0312 14:51:27.941861 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-5df675497f-chr8j"
Mar 12 14:51:28.036350 master-0 kubenswrapper[37036]: I0312 14:51:28.035390 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-587fdb965c-q72qp"]
Mar 12 14:51:28.036350 master-0 kubenswrapper[37036]: I0312 14:51:28.035621 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-587fdb965c-q72qp" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api-log" containerID="cri-o://f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467" gracePeriod=60
Mar 12 14:51:28.607641 master-0 kubenswrapper[37036]: I0312 14:51:28.607601 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:28.637405 master-0 kubenswrapper[37036]: I0312 14:51:28.636234 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerID="f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467" exitCode=143
Mar 12 14:51:28.637405 master-0 kubenswrapper[37036]: I0312 14:51:28.636369 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"}
Mar 12 14:51:28.637405 master-0 kubenswrapper[37036]: I0312 14:51:28.636409 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-587fdb965c-q72qp" event={"ID":"4f72582b-03ab-4662-bb3e-3683d598e72b","Type":"ContainerDied","Data":"a3645372bfa200ad2a463de128192d5f9784ae804a7c2ba4005a3814b2564bfb"}
Mar 12 14:51:28.637405 master-0 kubenswrapper[37036]: I0312 14:51:28.636434 37036 scope.go:117] "RemoveContainer" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:28.638232 master-0 kubenswrapper[37036]: I0312 14:51:28.637872 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-587fdb965c-q72qp"
Mar 12 14:51:28.641681 master-0 kubenswrapper[37036]: I0312 14:51:28.641646 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-ht555" event={"ID":"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463","Type":"ContainerStarted","Data":"88db76d7b96363b0fcc65e3924190a3c9677942fb9909acabbeb90bf48ae17e7"}
Mar 12 14:51:28.677408 master-0 kubenswrapper[37036]: I0312 14:51:28.676129 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-ht555" podStartSLOduration=4.574613691 podStartE2EDuration="8.67610939s" podCreationTimestamp="2026-03-12 14:51:20 +0000 UTC" firstStartedPulling="2026-03-12 14:51:23.827085409 +0000 UTC m=+942.834826346" lastFinishedPulling="2026-03-12 14:51:27.928581098 +0000 UTC m=+946.936322045" observedRunningTime="2026-03-12 14:51:28.669449623 +0000 UTC m=+947.677190570" watchObservedRunningTime="2026-03-12 14:51:28.67610939 +0000 UTC m=+947.683850327"
Mar 12 14:51:28.677408 master-0 kubenswrapper[37036]: I0312 14:51:28.676274 37036 scope.go:117] "RemoveContainer" containerID="f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"
Mar 12 14:51:28.683078 master-0 kubenswrapper[37036]: I0312 14:51:28.681727 37036 generic.go:334] "Generic (PLEG): container finished" podID="ee3a29d2-bf14-4521-896e-b0169adefcb2" containerID="e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0" exitCode=1
Mar 12 14:51:28.683078 master-0 kubenswrapper[37036]: I0312 14:51:28.681777 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerDied","Data":"e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0"}
Mar 12 14:51:28.683078 master-0 kubenswrapper[37036]: I0312 14:51:28.682515 37036 scope.go:117] "RemoveContainer" containerID="e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0"
Mar 12 14:51:28.683078 master-0 kubenswrapper[37036]: E0312 14:51:28.682851 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-5685659465-xhxkv_openstack(ee3a29d2-bf14-4521-896e-b0169adefcb2)\"" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podUID="ee3a29d2-bf14-4521-896e-b0169adefcb2"
Mar 12 14:51:28.720326 master-0 kubenswrapper[37036]: I0312 14:51:28.720285 37036 scope.go:117] "RemoveContainer" containerID="bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"
Mar 12 14:51:28.747705 master-0 kubenswrapper[37036]: I0312 14:51:28.747652 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.747779 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.747835 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728l6\" (UniqueName: \"kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.747959 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.748039 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.748112 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.748214 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.748279 master-0 kubenswrapper[37036]: I0312 14:51:28.748263 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs\") pod \"4f72582b-03ab-4662-bb3e-3683d598e72b\" (UID: \"4f72582b-03ab-4662-bb3e-3683d598e72b\") "
Mar 12 14:51:28.750120 master-0 kubenswrapper[37036]: I0312 14:51:28.750087 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs" (OuterVolumeSpecName: "logs") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID:
"4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:28.755683 master-0 kubenswrapper[37036]: I0312 14:51:28.752962 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:28.755867 master-0 kubenswrapper[37036]: I0312 14:51:28.755832 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 14:51:28.756068 master-0 kubenswrapper[37036]: I0312 14:51:28.756042 37036 scope.go:117] "RemoveContainer" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:28.756452 master-0 kubenswrapper[37036]: E0312 14:51:28.756432 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6\": container with ID starting with 44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6 not found: ID does not exist" containerID="44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"
Mar 12 14:51:28.756514 master-0 kubenswrapper[37036]: I0312 14:51:28.756457 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6"} err="failed to get container status \"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6\": rpc error: code = NotFound desc = could not find container \"44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6\": container with ID starting with 44706190c3f6b38a61382cf5a53cfdeb4c5230c97a3a3bc0a50df2477841a7e6 not found: ID does not exist"
Mar 12 14:51:28.756514 master-0 kubenswrapper[37036]: I0312 14:51:28.756477 37036 scope.go:117] "RemoveContainer" containerID="f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"
Mar 12 14:51:28.756674 master-0 kubenswrapper[37036]: I0312 14:51:28.756653 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts" (OuterVolumeSpecName: "scripts") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:28.756881 master-0 kubenswrapper[37036]: E0312 14:51:28.756859 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467\": container with ID starting with f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467 not found: ID does not exist" containerID="f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"
Mar 12 14:51:28.757024 master-0 kubenswrapper[37036]: I0312 14:51:28.756996 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467"} err="failed to get container status \"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467\": rpc error: code = NotFound desc = could not find container \"f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467\": container with ID starting with f3c99d4969bf4db83d78362ca1faa18ddc899ee4154b0962c3ab5143b5fd6467 not found: ID does not exist"
Mar 12 14:51:28.757115 master-0 kubenswrapper[37036]: I0312 14:51:28.757101 37036 scope.go:117] "RemoveContainer" containerID="bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"
Mar 12 14:51:28.764845 master-0 kubenswrapper[37036]: E0312 14:51:28.760157 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449\": container with ID starting with bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449 not found: ID does not exist" containerID="bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"
Mar 12 14:51:28.764845 master-0 kubenswrapper[37036]: I0312 14:51:28.760187 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449"} err="failed to get container status \"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449\": rpc error: code = NotFound desc = could not find container \"bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449\": container with ID starting with bf4b936a363ad3d4a62443746f8f433ab14a885554c7f3795cec69249496e449 not found: ID does not exist"
Mar 12 14:51:28.764845 master-0 kubenswrapper[37036]: I0312 14:51:28.760204 37036 scope.go:117] "RemoveContainer" containerID="fde1591505c12d055b802d151139f4372caac4dda91b98e9941added9fdd34fe"
Mar 12 14:51:28.764845 master-0 kubenswrapper[37036]: I0312 14:51:28.760250 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6" (OuterVolumeSpecName: "kube-api-access-728l6") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "kube-api-access-728l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:28.765961 master-0 kubenswrapper[37036]: I0312 14:51:28.765913 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:28.798864 master-0 kubenswrapper[37036]: I0312 14:51:28.798796 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data" (OuterVolumeSpecName: "config-data") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:28.834744 master-0 kubenswrapper[37036]: I0312 14:51:28.834644 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f72582b-03ab-4662-bb3e-3683d598e72b" (UID: "4f72582b-03ab-4662-bb3e-3683d598e72b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851044 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851093 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851109 37036 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4f72582b-03ab-4662-bb3e-3683d598e72b-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851124 37036 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851135 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f72582b-03ab-4662-bb3e-3683d598e72b-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851146 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:28.851159 master-0 kubenswrapper[37036]: I0312 14:51:28.851158 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f72582b-03ab-4662-bb3e-3683d598e72b-scripts\") on
node \"master-0\" DevicePath \"\"" Mar 12 14:51:28.851526 master-0 kubenswrapper[37036]: I0312 14:51:28.851170 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-728l6\" (UniqueName: \"kubernetes.io/projected/4f72582b-03ab-4662-bb3e-3683d598e72b-kube-api-access-728l6\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:28.990494 master-0 kubenswrapper[37036]: I0312 14:51:28.990412 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-75dfc444b6-mtcqr"] Mar 12 14:51:28.991209 master-0 kubenswrapper[37036]: E0312 14:51:28.991174 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.991209 master-0 kubenswrapper[37036]: I0312 14:51:28.991200 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.991373 master-0 kubenswrapper[37036]: E0312 14:51:28.991228 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="init" Mar 12 14:51:28.991373 master-0 kubenswrapper[37036]: I0312 14:51:28.991237 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="init" Mar 12 14:51:28.991373 master-0 kubenswrapper[37036]: E0312 14:51:28.991251 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api-log" Mar 12 14:51:28.991373 master-0 kubenswrapper[37036]: I0312 14:51:28.991260 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api-log" Mar 12 14:51:28.991373 master-0 kubenswrapper[37036]: E0312 14:51:28.991282 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="init" Mar 12 14:51:28.991373 master-0 
kubenswrapper[37036]: I0312 14:51:28.991290 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="init" Mar 12 14:51:28.991715 master-0 kubenswrapper[37036]: I0312 14:51:28.991624 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.991715 master-0 kubenswrapper[37036]: I0312 14:51:28.991666 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api-log" Mar 12 14:51:28.993252 master-0 kubenswrapper[37036]: E0312 14:51:28.992000 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.993252 master-0 kubenswrapper[37036]: I0312 14:51:28.992025 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.993252 master-0 kubenswrapper[37036]: I0312 14:51:28.992337 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" containerName="ironic-api" Mar 12 14:51:28.993564 master-0 kubenswrapper[37036]: I0312 14:51:28.993286 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:28.998784 master-0 kubenswrapper[37036]: I0312 14:51:28.998737 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 14:51:28.998784 master-0 kubenswrapper[37036]: I0312 14:51:28.998815 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 14:51:28.999172 master-0 kubenswrapper[37036]: I0312 14:51:28.999064 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 14:51:29.018051 master-0 kubenswrapper[37036]: I0312 14:51:29.017979 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75dfc444b6-mtcqr"] Mar 12 14:51:29.029942 master-0 kubenswrapper[37036]: I0312 14:51:29.026996 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-587fdb965c-q72qp"] Mar 12 14:51:29.041162 master-0 kubenswrapper[37036]: I0312 14:51:29.041085 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-587fdb965c-q72qp"] Mar 12 14:51:29.065096 master-0 kubenswrapper[37036]: I0312 14:51:29.064821 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shqv\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-kube-api-access-8shqv\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065096 master-0 kubenswrapper[37036]: I0312 14:51:29.064953 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-combined-ca-bundle\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " 
pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065096 master-0 kubenswrapper[37036]: I0312 14:51:29.065013 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-config-data\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065628 master-0 kubenswrapper[37036]: I0312 14:51:29.065261 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-internal-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065628 master-0 kubenswrapper[37036]: I0312 14:51:29.065367 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-etc-swift\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065628 master-0 kubenswrapper[37036]: I0312 14:51:29.065499 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-run-httpd\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.065628 master-0 kubenswrapper[37036]: I0312 14:51:29.065581 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-log-httpd\") pod 
\"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.066423 master-0 kubenswrapper[37036]: I0312 14:51:29.066382 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-public-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.168639 master-0 kubenswrapper[37036]: I0312 14:51:29.168509 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-65b67d6cc7-hwxhs" Mar 12 14:51:29.169333 master-0 kubenswrapper[37036]: I0312 14:51:29.169280 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-etc-swift\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.169423 master-0 kubenswrapper[37036]: I0312 14:51:29.169394 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-run-httpd\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.169492 master-0 kubenswrapper[37036]: I0312 14:51:29.169445 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-log-httpd\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.169552 
37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-public-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.169616 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8shqv\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-kube-api-access-8shqv\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.169658 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-combined-ca-bundle\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.169688 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-config-data\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.169797 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-internal-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 
kubenswrapper[37036]: I0312 14:51:29.170556 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-log-httpd\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.172929 master-0 kubenswrapper[37036]: I0312 14:51:29.171775 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/649018e4-7368-455c-8b92-fae29b1b01ec-run-httpd\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.176326 master-0 kubenswrapper[37036]: I0312 14:51:29.176274 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-config-data\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.176776 master-0 kubenswrapper[37036]: I0312 14:51:29.176674 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-etc-swift\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.178066 master-0 kubenswrapper[37036]: I0312 14:51:29.178035 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-public-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.179572 master-0 kubenswrapper[37036]: I0312 14:51:29.179531 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-internal-tls-certs\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.182938 master-0 kubenswrapper[37036]: I0312 14:51:29.181329 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649018e4-7368-455c-8b92-fae29b1b01ec-combined-ca-bundle\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.198644 master-0 kubenswrapper[37036]: I0312 14:51:29.197609 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8shqv\" (UniqueName: \"kubernetes.io/projected/649018e4-7368-455c-8b92-fae29b1b01ec-kube-api-access-8shqv\") pod \"swift-proxy-75dfc444b6-mtcqr\" (UID: \"649018e4-7368-455c-8b92-fae29b1b01ec\") " pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.328939 master-0 kubenswrapper[37036]: I0312 14:51:29.327287 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f72582b-03ab-4662-bb3e-3683d598e72b" path="/var/lib/kubelet/pods/4f72582b-03ab-4662-bb3e-3683d598e72b/volumes" Mar 12 14:51:29.328939 master-0 kubenswrapper[37036]: I0312 14:51:29.328299 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"] Mar 12 14:51:29.328939 master-0 kubenswrapper[37036]: I0312 14:51:29.328529 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-865fc75fb8-6hmpx" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-api" containerID="cri-o://fd5bbad93f3b715cb2cff75c5354ceb537717061278c7c8765fe906f2526900e" gracePeriod=30 Mar 12 14:51:29.328939 master-0 kubenswrapper[37036]: I0312 
14:51:29.328878 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-865fc75fb8-6hmpx" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-httpd" containerID="cri-o://4bbe3b4a0e9688f41597323db7c8d29bbb53026d0fbd65feee38b96c8e042453" gracePeriod=30 Mar 12 14:51:29.358002 master-0 kubenswrapper[37036]: I0312 14:51:29.357185 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:29.707056 master-0 kubenswrapper[37036]: I0312 14:51:29.706850 37036 generic.go:334] "Generic (PLEG): container finished" podID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerID="4bbe3b4a0e9688f41597323db7c8d29bbb53026d0fbd65feee38b96c8e042453" exitCode=0 Mar 12 14:51:29.707056 master-0 kubenswrapper[37036]: I0312 14:51:29.706934 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerDied","Data":"4bbe3b4a0e9688f41597323db7c8d29bbb53026d0fbd65feee38b96c8e042453"} Mar 12 14:51:29.867007 master-0 kubenswrapper[37036]: W0312 14:51:29.866596 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod649018e4_7368_455c_8b92_fae29b1b01ec.slice/crio-f26f165e726015fbac6a0ff605cccc06b97e30e94f85b677f384ee4d6c55f4e4 WatchSource:0}: Error finding container f26f165e726015fbac6a0ff605cccc06b97e30e94f85b677f384ee4d6c55f4e4: Status 404 returned error can't find the container with id f26f165e726015fbac6a0ff605cccc06b97e30e94f85b677f384ee4d6c55f4e4 Mar 12 14:51:29.868296 master-0 kubenswrapper[37036]: I0312 14:51:29.867824 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75dfc444b6-mtcqr"] Mar 12 14:51:30.728766 master-0 kubenswrapper[37036]: I0312 14:51:30.728708 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-75dfc444b6-mtcqr" event={"ID":"649018e4-7368-455c-8b92-fae29b1b01ec","Type":"ContainerStarted","Data":"8bb70a40abb6c6173e6eaee23ce2ac81cac38617b2969154a58bd2fc9e8b547f"} Mar 12 14:51:30.728766 master-0 kubenswrapper[37036]: I0312 14:51:30.728770 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75dfc444b6-mtcqr" event={"ID":"649018e4-7368-455c-8b92-fae29b1b01ec","Type":"ContainerStarted","Data":"f26f165e726015fbac6a0ff605cccc06b97e30e94f85b677f384ee4d6c55f4e4"} Mar 12 14:51:32.360553 master-0 kubenswrapper[37036]: I0312 14:51:32.360486 37036 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:32.361369 master-0 kubenswrapper[37036]: I0312 14:51:32.361309 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:32.361807 master-0 kubenswrapper[37036]: I0312 14:51:32.361753 37036 scope.go:117] "RemoveContainer" containerID="e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0" Mar 12 14:51:32.362305 master-0 kubenswrapper[37036]: E0312 14:51:32.362280 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-5685659465-xhxkv_openstack(ee3a29d2-bf14-4521-896e-b0169adefcb2)\"" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podUID="ee3a29d2-bf14-4521-896e-b0169adefcb2" Mar 12 14:51:32.761338 master-0 kubenswrapper[37036]: I0312 14:51:32.761284 37036 scope.go:117] "RemoveContainer" containerID="e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0" Mar 12 14:51:32.761858 master-0 kubenswrapper[37036]: E0312 14:51:32.761610 37036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-5685659465-xhxkv_openstack(ee3a29d2-bf14-4521-896e-b0169adefcb2)\"" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" podUID="ee3a29d2-bf14-4521-896e-b0169adefcb2" Mar 12 14:51:34.035030 master-0 kubenswrapper[37036]: I0312 14:51:34.033235 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:51:34.035030 master-0 kubenswrapper[37036]: I0312 14:51:34.033502 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bc20e-default-external-api-0" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-log" containerID="cri-o://4a5d7cdb26d1dba2275f36ad028c23931eadc88d305215e98e2edefe9cf43015" gracePeriod=30 Mar 12 14:51:34.035030 master-0 kubenswrapper[37036]: I0312 14:51:34.033685 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bc20e-default-external-api-0" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-httpd" containerID="cri-o://c8ba044ac56699d5d1fefb52ed073dbfee76f81402b701b3312728e398391369" gracePeriod=30 Mar 12 14:51:36.330118 master-0 kubenswrapper[37036]: I0312 14:51:36.328695 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fqxgc"] Mar 12 14:51:36.333271 master-0 kubenswrapper[37036]: I0312 14:51:36.333224 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.352094 master-0 kubenswrapper[37036]: I0312 14:51:36.352024 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fqxgc"] Mar 12 14:51:36.429909 master-0 kubenswrapper[37036]: I0312 14:51:36.429697 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.439673 master-0 kubenswrapper[37036]: I0312 14:51:36.439616 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dncv\" (UniqueName: \"kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.456115 master-0 kubenswrapper[37036]: I0312 14:51:36.456054 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9dzk7"] Mar 12 14:51:36.457807 master-0 kubenswrapper[37036]: I0312 14:51:36.457771 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.492325 master-0 kubenswrapper[37036]: I0312 14:51:36.492234 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9dzk7"] Mar 12 14:51:36.524852 master-0 kubenswrapper[37036]: I0312 14:51:36.522614 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a000-account-create-update-t5sxm"] Mar 12 14:51:36.524852 master-0 kubenswrapper[37036]: I0312 14:51:36.524556 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.540983 master-0 kubenswrapper[37036]: I0312 14:51:36.540931 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 12 14:51:36.541187 master-0 kubenswrapper[37036]: I0312 14:51:36.541139 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a000-account-create-update-t5sxm"] Mar 12 14:51:36.556265 master-0 kubenswrapper[37036]: I0312 14:51:36.556216 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.556570 master-0 kubenswrapper[37036]: I0312 14:51:36.556555 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dncv\" (UniqueName: \"kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.556693 master-0 kubenswrapper[37036]: I0312 14:51:36.556676 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts\") pod \"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.556802 master-0 kubenswrapper[37036]: I0312 14:51:36.556789 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hj8k\" (UniqueName: \"kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k\") pod 
\"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.563299 master-0 kubenswrapper[37036]: I0312 14:51:36.557046 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.566168 master-0 kubenswrapper[37036]: I0312 14:51:36.566111 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zktq7"] Mar 12 14:51:36.571354 master-0 kubenswrapper[37036]: I0312 14:51:36.568244 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.593428 master-0 kubenswrapper[37036]: I0312 14:51:36.592663 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zktq7"] Mar 12 14:51:36.593428 master-0 kubenswrapper[37036]: I0312 14:51:36.593357 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dncv\" (UniqueName: \"kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv\") pod \"nova-api-db-create-fqxgc\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.659739 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnwf\" (UniqueName: \"kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.659822 
37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts\") pod \"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.659858 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hj8k\" (UniqueName: \"kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k\") pod \"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.660005 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8g48\" (UniqueName: \"kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48\") pod \"nova-cell1-db-create-zktq7\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.660052 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.660116 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts\") pod \"nova-cell1-db-create-zktq7\" (UID: 
\"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.665224 master-0 kubenswrapper[37036]: I0312 14:51:36.660842 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts\") pod \"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.677153 master-0 kubenswrapper[37036]: I0312 14:51:36.675981 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-994d-account-create-update-h6mj5"] Mar 12 14:51:36.680871 master-0 kubenswrapper[37036]: I0312 14:51:36.680852 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.683937 master-0 kubenswrapper[37036]: I0312 14:51:36.683862 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hj8k\" (UniqueName: \"kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k\") pod \"nova-cell0-db-create-9dzk7\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.685580 master-0 kubenswrapper[37036]: I0312 14:51:36.685448 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-994d-account-create-update-h6mj5"] Mar 12 14:51:36.686337 master-0 kubenswrapper[37036]: I0312 14:51:36.686231 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 12 14:51:36.725919 master-0 kubenswrapper[37036]: I0312 14:51:36.721051 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765239 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8g48\" (UniqueName: \"kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48\") pod \"nova-cell1-db-create-zktq7\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765336 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765411 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765474 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts\") pod \"nova-cell1-db-create-zktq7\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765539 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnwf\" (UniqueName: 
\"kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.765922 master-0 kubenswrapper[37036]: I0312 14:51:36.765831 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mhk\" (UniqueName: \"kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.766448 master-0 kubenswrapper[37036]: I0312 14:51:36.766037 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.766448 master-0 kubenswrapper[37036]: I0312 14:51:36.766410 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts\") pod \"nova-cell1-db-create-zktq7\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.782688 master-0 kubenswrapper[37036]: I0312 14:51:36.782641 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8g48\" (UniqueName: \"kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48\") pod \"nova-cell1-db-create-zktq7\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:36.784669 master-0 kubenswrapper[37036]: I0312 
14:51:36.784606 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnwf\" (UniqueName: \"kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf\") pod \"nova-api-a000-account-create-update-t5sxm\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.803584 master-0 kubenswrapper[37036]: I0312 14:51:36.803518 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:36.841042 master-0 kubenswrapper[37036]: I0312 14:51:36.838795 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d364-account-create-update-dznnx"] Mar 12 14:51:36.843140 master-0 kubenswrapper[37036]: I0312 14:51:36.841299 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:36.846460 master-0 kubenswrapper[37036]: I0312 14:51:36.845492 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 12 14:51:36.856525 master-0 kubenswrapper[37036]: I0312 14:51:36.856450 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d364-account-create-update-dznnx"] Mar 12 14:51:36.872385 master-0 kubenswrapper[37036]: I0312 14:51:36.869652 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.872385 master-0 kubenswrapper[37036]: I0312 14:51:36.869919 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mhk\" (UniqueName: 
\"kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.872385 master-0 kubenswrapper[37036]: I0312 14:51:36.870705 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.892555 master-0 kubenswrapper[37036]: I0312 14:51:36.892502 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mhk\" (UniqueName: \"kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk\") pod \"nova-cell0-994d-account-create-update-h6mj5\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:36.952759 master-0 kubenswrapper[37036]: I0312 14:51:36.952644 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:36.972466 master-0 kubenswrapper[37036]: I0312 14:51:36.972399 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:36.972744 master-0 kubenswrapper[37036]: I0312 14:51:36.972541 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxh2r\" (UniqueName: \"kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:36.985880 master-0 kubenswrapper[37036]: I0312 14:51:36.985812 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:37.052292 master-0 kubenswrapper[37036]: I0312 14:51:37.052250 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:37.075070 master-0 kubenswrapper[37036]: I0312 14:51:37.075007 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:37.075308 master-0 kubenswrapper[37036]: I0312 14:51:37.075118 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxh2r\" (UniqueName: \"kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:37.076211 master-0 kubenswrapper[37036]: I0312 14:51:37.076181 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:37.095424 master-0 kubenswrapper[37036]: I0312 14:51:37.095319 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxh2r\" (UniqueName: \"kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r\") pod \"nova-cell1-d364-account-create-update-dznnx\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:37.248672 master-0 kubenswrapper[37036]: I0312 14:51:37.248611 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:38.399074 master-0 kubenswrapper[37036]: I0312 14:51:38.395214 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:51:38.399074 master-0 kubenswrapper[37036]: I0312 14:51:38.395517 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bc20e-default-internal-api-0" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-log" containerID="cri-o://a4712f26a1b45f8060cab3e25f59a258c36af65889631302c1c3828633456a0f" gracePeriod=30 Mar 12 14:51:38.399074 master-0 kubenswrapper[37036]: I0312 14:51:38.396416 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bc20e-default-internal-api-0" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-httpd" containerID="cri-o://b6861f8141728bac76ef239d5809958cef5345452e37261a4216bcbf824f1dba" gracePeriod=30 Mar 12 14:51:38.856448 master-0 kubenswrapper[37036]: I0312 14:51:38.856380 37036 generic.go:334] "Generic (PLEG): container finished" podID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerID="c8ba044ac56699d5d1fefb52ed073dbfee76f81402b701b3312728e398391369" exitCode=0 Mar 12 14:51:38.856448 master-0 kubenswrapper[37036]: I0312 14:51:38.856420 37036 generic.go:334] "Generic (PLEG): container finished" podID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerID="4a5d7cdb26d1dba2275f36ad028c23931eadc88d305215e98e2edefe9cf43015" exitCode=143 Mar 12 14:51:38.856448 master-0 kubenswrapper[37036]: I0312 14:51:38.856454 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerDied","Data":"c8ba044ac56699d5d1fefb52ed073dbfee76f81402b701b3312728e398391369"} Mar 12 14:51:38.856771 master-0 kubenswrapper[37036]: I0312 14:51:38.856481 37036 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerDied","Data":"4a5d7cdb26d1dba2275f36ad028c23931eadc88d305215e98e2edefe9cf43015"} Mar 12 14:51:38.859442 master-0 kubenswrapper[37036]: I0312 14:51:38.859405 37036 generic.go:334] "Generic (PLEG): container finished" podID="d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" containerID="88db76d7b96363b0fcc65e3924190a3c9677942fb9909acabbeb90bf48ae17e7" exitCode=0 Mar 12 14:51:38.859544 master-0 kubenswrapper[37036]: I0312 14:51:38.859447 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-ht555" event={"ID":"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463","Type":"ContainerDied","Data":"88db76d7b96363b0fcc65e3924190a3c9677942fb9909acabbeb90bf48ae17e7"} Mar 12 14:51:38.862928 master-0 kubenswrapper[37036]: I0312 14:51:38.862863 37036 generic.go:334] "Generic (PLEG): container finished" podID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerID="fd5bbad93f3b715cb2cff75c5354ceb537717061278c7c8765fe906f2526900e" exitCode=0 Mar 12 14:51:38.863021 master-0 kubenswrapper[37036]: I0312 14:51:38.862928 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerDied","Data":"fd5bbad93f3b715cb2cff75c5354ceb537717061278c7c8765fe906f2526900e"} Mar 12 14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.936732 37036 generic.go:334] "Generic (PLEG): container finished" podID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerID="b6861f8141728bac76ef239d5809958cef5345452e37261a4216bcbf824f1dba" exitCode=0 Mar 12 14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.936781 37036 generic.go:334] "Generic (PLEG): container finished" podID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerID="a4712f26a1b45f8060cab3e25f59a258c36af65889631302c1c3828633456a0f" exitCode=143 Mar 12 
14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.936823 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerDied","Data":"b6861f8141728bac76ef239d5809958cef5345452e37261a4216bcbf824f1dba"} Mar 12 14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.936854 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerDied","Data":"a4712f26a1b45f8060cab3e25f59a258c36af65889631302c1c3828633456a0f"} Mar 12 14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.939454 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc75fb8-6hmpx" event={"ID":"0289ee73-116b-4f34-ae6e-5560906a2df8","Type":"ContainerDied","Data":"4bf401abfc578e15d896145696115cc76e0aede234da331dda161d47ea4abc22"} Mar 12 14:51:42.946859 master-0 kubenswrapper[37036]: I0312 14:51:42.939503 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf401abfc578e15d896145696115cc76e0aede234da331dda161d47ea4abc22" Mar 12 14:51:42.954790 master-0 kubenswrapper[37036]: I0312 14:51:42.954575 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"3a5b885c-0466-4883-9af2-c8942c5b700c","Type":"ContainerDied","Data":"2532e412223bca47be9e5608f3bc52f131cab2bb7d78bc36893204c82a46cb2d"} Mar 12 14:51:42.954790 master-0 kubenswrapper[37036]: I0312 14:51:42.954631 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2532e412223bca47be9e5608f3bc52f131cab2bb7d78bc36893204c82a46cb2d" Mar 12 14:51:42.956988 master-0 kubenswrapper[37036]: I0312 14:51:42.956775 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-ht555" 
event={"ID":"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463","Type":"ContainerDied","Data":"f35e822940d527de569ac142409472bf8d229e8af053f8e668cb1dbf5b61d42b"} Mar 12 14:51:42.956988 master-0 kubenswrapper[37036]: I0312 14:51:42.956814 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35e822940d527de569ac142409472bf8d229e8af053f8e668cb1dbf5b61d42b" Mar 12 14:51:43.083879 master-0 kubenswrapper[37036]: I0312 14:51:43.081834 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-ht555" Mar 12 14:51:43.087929 master-0 kubenswrapper[37036]: I0312 14:51:43.086441 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-865fc75fb8-6hmpx" Mar 12 14:51:43.099921 master-0 kubenswrapper[37036]: I0312 14:51:43.097182 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.250023 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.250107 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.250128 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.250203 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.250233 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.252981 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253053 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config\") pod \"0289ee73-116b-4f34-ae6e-5560906a2df8\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253071 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 
kubenswrapper[37036]: I0312 14:51:43.253174 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253196 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs\") pod \"0289ee73-116b-4f34-ae6e-5560906a2df8\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253219 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253269 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253288 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253340 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle\") pod \"0289ee73-116b-4f34-ae6e-5560906a2df8\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253365 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config\") pod \"0289ee73-116b-4f34-ae6e-5560906a2df8\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253387 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96j5f\" (UniqueName: \"kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: \"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253420 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctl2\" (UniqueName: \"kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253451 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic\") pod \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\" (UID: \"d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253472 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle\") pod \"3a5b885c-0466-4883-9af2-c8942c5b700c\" (UID: 
\"3a5b885c-0466-4883-9af2-c8942c5b700c\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.253490 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xttbh\" (UniqueName: \"kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh\") pod \"0289ee73-116b-4f34-ae6e-5560906a2df8\" (UID: \"0289ee73-116b-4f34-ae6e-5560906a2df8\") " Mar 12 14:51:43.258002 master-0 kubenswrapper[37036]: I0312 14:51:43.256755 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:51:43.285276 master-0 kubenswrapper[37036]: I0312 14:51:43.275626 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0289ee73-116b-4f34-ae6e-5560906a2df8" (UID: "0289ee73-116b-4f34-ae6e-5560906a2df8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:43.285276 master-0 kubenswrapper[37036]: I0312 14:51:43.275812 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh" (OuterVolumeSpecName: "kube-api-access-xttbh") pod "0289ee73-116b-4f34-ae6e-5560906a2df8" (UID: "0289ee73-116b-4f34-ae6e-5560906a2df8"). InnerVolumeSpecName "kube-api-access-xttbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:43.285276 master-0 kubenswrapper[37036]: I0312 14:51:43.275935 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2" (OuterVolumeSpecName: "kube-api-access-qctl2") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "kube-api-access-qctl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:43.290908 master-0 kubenswrapper[37036]: I0312 14:51:43.290830 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts" (OuterVolumeSpecName: "scripts") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:43.291850 master-0 kubenswrapper[37036]: I0312 14:51:43.291803 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:51:43.292580 master-0 kubenswrapper[37036]: I0312 14:51:43.292550 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs" (OuterVolumeSpecName: "logs") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:43.295253 master-0 kubenswrapper[37036]: I0312 14:51:43.295193 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:43.302039 master-0 kubenswrapper[37036]: I0312 14:51:43.301983 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f" (OuterVolumeSpecName: "kube-api-access-96j5f") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "kube-api-access-96j5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:43.310018 master-0 kubenswrapper[37036]: I0312 14:51:43.309950 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts" (OuterVolumeSpecName: "scripts") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.327302 master-0 kubenswrapper[37036]: I0312 14:51:43.327098 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config" (OuterVolumeSpecName: "config") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.336334 master-0 kubenswrapper[37036]: I0312 14:51:43.336273 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1" (OuterVolumeSpecName: "glance") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 12 14:51:43.344242 master-0 kubenswrapper[37036]: I0312 14:51:43.344165 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.360593 37036 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390363 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390423 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5b885c-0466-4883-9af2-c8942c5b700c-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390447 37036 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390490 37036 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") on node \"master-0\" "
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390505 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390524 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390534 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390544 37036 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-httpd-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390556 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96j5f\" (UniqueName: \"kubernetes.io/projected/3a5b885c-0466-4883-9af2-c8942c5b700c-kube-api-access-96j5f\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390800 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctl2\" (UniqueName: \"kubernetes.io/projected/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-kube-api-access-qctl2\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390816 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.392137 master-0 kubenswrapper[37036]: I0312 14:51:43.390828 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xttbh\" (UniqueName: \"kubernetes.io/projected/0289ee73-116b-4f34-ae6e-5560906a2df8-kube-api-access-xttbh\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.435771 master-0 kubenswrapper[37036]: I0312 14:51:43.435689 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.448472 master-0 kubenswrapper[37036]: I0312 14:51:43.447911 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.457494 master-0 kubenswrapper[37036]: I0312 14:51:43.455022 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" (UID: "d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.496560 master-0 kubenswrapper[37036]: I0312 14:51:43.496380 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.496560 master-0 kubenswrapper[37036]: I0312 14:51:43.496433 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.496560 master-0 kubenswrapper[37036]: I0312 14:51:43.496449 37036 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.502636 master-0 kubenswrapper[37036]: I0312 14:51:43.502181 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0289ee73-116b-4f34-ae6e-5560906a2df8" (UID: "0289ee73-116b-4f34-ae6e-5560906a2df8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.550463 master-0 kubenswrapper[37036]: I0312 14:51:43.550397 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0289ee73-116b-4f34-ae6e-5560906a2df8" (UID: "0289ee73-116b-4f34-ae6e-5560906a2df8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.576399 master-0 kubenswrapper[37036]: I0312 14:51:43.576320 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config" (OuterVolumeSpecName: "config") pod "0289ee73-116b-4f34-ae6e-5560906a2df8" (UID: "0289ee73-116b-4f34-ae6e-5560906a2df8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.600230 master-0 kubenswrapper[37036]: I0312 14:51:43.600077 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.600230 master-0 kubenswrapper[37036]: I0312 14:51:43.600127 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.600230 master-0 kubenswrapper[37036]: I0312 14:51:43.600140 37036 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0289ee73-116b-4f34-ae6e-5560906a2df8-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.631660 master-0 kubenswrapper[37036]: I0312 14:51:43.631452 37036 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 12 14:51:43.632741 master-0 kubenswrapper[37036]: I0312 14:51:43.632715 37036 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2" (UniqueName: "kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1") on node "master-0"
Mar 12 14:51:43.634329 master-0 kubenswrapper[37036]: I0312 14:51:43.634144 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data" (OuterVolumeSpecName: "config-data") pod "3a5b885c-0466-4883-9af2-c8942c5b700c" (UID: "3a5b885c-0466-4883-9af2-c8942c5b700c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.659002 master-0 kubenswrapper[37036]: I0312 14:51:43.658962 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:51:43.675336 master-0 kubenswrapper[37036]: I0312 14:51:43.675226 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fqxgc"]
Mar 12 14:51:43.723984 master-0 kubenswrapper[37036]: I0312 14:51:43.721844 37036 reconciler_common.go:293] "Volume detached for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.723984 master-0 kubenswrapper[37036]: I0312 14:51:43.721888 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5b885c-0466-4883-9af2-c8942c5b700c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.804162 master-0 kubenswrapper[37036]: I0312 14:51:43.804084 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d364-account-create-update-dznnx"]
Mar 12 14:51:43.816320 master-0 kubenswrapper[37036]: I0312 14:51:43.816294 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 12 14:51:43.818178 master-0 kubenswrapper[37036]: I0312 14:51:43.818158 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 12 14:51:43.823349 master-0 kubenswrapper[37036]: I0312 14:51:43.823316 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.824835 master-0 kubenswrapper[37036]: I0312 14:51:43.824610 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:43.826977 master-0 kubenswrapper[37036]: I0312 14:51:43.826956 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.834293 master-0 kubenswrapper[37036]: I0312 14:51:43.834250 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-867bb94d6d-fmw6x"
Mar 12 14:51:43.841795 master-0 kubenswrapper[37036]: I0312 14:51:43.841752 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.841886 master-0 kubenswrapper[37036]: I0312 14:51:43.841815 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.841886 master-0 kubenswrapper[37036]: I0312 14:51:43.841862 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-844f2\" (UniqueName: \"kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.841986 master-0 kubenswrapper[37036]: I0312 14:51:43.841909 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.842106 master-0 kubenswrapper[37036]: I0312 14:51:43.842088 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.842153 master-0 kubenswrapper[37036]: I0312 14:51:43.842125 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts\") pod \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\" (UID: \"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae\") "
Mar 12 14:51:43.843910 master-0 kubenswrapper[37036]: I0312 14:51:43.843646 37036 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.843910 master-0 kubenswrapper[37036]: I0312 14:51:43.843776 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs" (OuterVolumeSpecName: "logs") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:51:43.844345 master-0 kubenswrapper[37036]: I0312 14:51:43.844256 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-867bb94d6d-fmw6x"
Mar 12 14:51:43.855228 master-0 kubenswrapper[37036]: I0312 14:51:43.855176 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a000-account-create-update-t5sxm"]
Mar 12 14:51:43.859228 master-0 kubenswrapper[37036]: I0312 14:51:43.859178 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2" (OuterVolumeSpecName: "kube-api-access-844f2") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "kube-api-access-844f2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:51:43.860074 master-0 kubenswrapper[37036]: I0312 14:51:43.859936 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts" (OuterVolumeSpecName: "scripts") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.872395 master-0 kubenswrapper[37036]: I0312 14:51:43.872202 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946" (OuterVolumeSpecName: "glance") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 12 14:51:43.901588 master-0 kubenswrapper[37036]: I0312 14:51:43.901466 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:43.969377 master-0 kubenswrapper[37036]: I0312 14:51:43.959141 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.969377 master-0 kubenswrapper[37036]: I0312 14:51:43.959196 37036 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") on node \"master-0\" "
Mar 12 14:51:43.969377 master-0 kubenswrapper[37036]: I0312 14:51:43.959208 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.969377 master-0 kubenswrapper[37036]: I0312 14:51:43.959218 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.969377 master-0 kubenswrapper[37036]: I0312 14:51:43.959229 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-844f2\" (UniqueName: \"kubernetes.io/projected/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-kube-api-access-844f2\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:43.989286 master-0 kubenswrapper[37036]: I0312 14:51:43.989226 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b5866567d-h9t4r"]
Mar 12 14:51:43.989614 master-0 kubenswrapper[37036]: I0312 14:51:43.989539 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-b5866567d-h9t4r" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-log" containerID="cri-o://493a3f7dd30049dbc9300fbe1793f1c92e24f73ae60232387d925f9a1db82115" gracePeriod=30
Mar 12 14:51:43.990353 master-0 kubenswrapper[37036]: I0312 14:51:43.990089 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-b5866567d-h9t4r" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-api" containerID="cri-o://968ac1a419ce83c9df0b7935a1d2d00b1c77d46fefb605ce697a12a530f04a12" gracePeriod=30
Mar 12 14:51:44.007691 master-0 kubenswrapper[37036]: I0312 14:51:44.007642 37036 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 12 14:51:44.007973 master-0 kubenswrapper[37036]: I0312 14:51:44.007862 37036 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14" (UniqueName: "kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946") on node "master-0"
Mar 12 14:51:44.026494 master-0 kubenswrapper[37036]: I0312 14:51:44.024255 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3083373-d0bb-4775-b8ea-1d34f46bc0b7","Type":"ContainerStarted","Data":"b97decea461474c3654638b3fe29c778a36b7d44e4358142e07754a65f10fae0"}
Mar 12 14:51:44.041302 master-0 kubenswrapper[37036]: I0312 14:51:44.041247 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a000-account-create-update-t5sxm" event={"ID":"eb4a72f6-6d97-4d7b-a538-11604e6144ea","Type":"ContainerStarted","Data":"eeaffe7d7209188c696e9a8baca9c2cde078e79aaff725db51e3a35adb785f89"}
Mar 12 14:51:44.060573 master-0 kubenswrapper[37036]: I0312 14:51:44.054138 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fqxgc" event={"ID":"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4","Type":"ContainerStarted","Data":"a366e663d10b8d6c064cb828dfc4cc618c3bb8b2bc76751191a7d69b2339ae6e"}
Mar 12 14:51:44.069847 master-0 kubenswrapper[37036]: I0312 14:51:44.067936 37036 reconciler_common.go:293] "Volume detached for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:44.081607 master-0 kubenswrapper[37036]: I0312 14:51:44.071435 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.25376277 podStartE2EDuration="23.071417017s" podCreationTimestamp="2026-03-12 14:51:21 +0000 UTC" firstStartedPulling="2026-03-12 14:51:23.83053685 +0000 UTC m=+942.838277777" lastFinishedPulling="2026-03-12 14:51:42.648191097 +0000 UTC m=+961.655932024" observedRunningTime="2026-03-12 14:51:44.055064812 +0000 UTC m=+963.062805749" watchObservedRunningTime="2026-03-12 14:51:44.071417017 +0000 UTC m=+963.079157954"
Mar 12 14:51:44.095797 master-0 kubenswrapper[37036]: I0312 14:51:44.089724 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:44.095797 master-0 kubenswrapper[37036]: I0312 14:51:44.091255 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75dfc444b6-mtcqr" event={"ID":"649018e4-7368-455c-8b92-fae29b1b01ec","Type":"ContainerStarted","Data":"49aef93133e40dc464668aeeb32033febe7595bbb4c3c33a521dbfd42c46f885"}
Mar 12 14:51:44.095797 master-0 kubenswrapper[37036]: I0312 14:51:44.093074 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75dfc444b6-mtcqr"
Mar 12 14:51:44.095797 master-0 kubenswrapper[37036]: I0312 14:51:44.093578 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75dfc444b6-mtcqr"
Mar 12 14:51:44.114589 master-0 kubenswrapper[37036]: I0312 14:51:44.110854 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d364-account-create-update-dznnx" event={"ID":"14afc361-6f21-415e-af7b-7ed3a4f9c48b","Type":"ContainerStarted","Data":"8eb3a8d223a44d45be1ac9f6c12a1021522a12d04b6ded12df6cb1901a54fa48"}
Mar 12 14:51:44.157752 master-0 kubenswrapper[37036]: I0312 14:51:44.157427 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-75dfc444b6-mtcqr" podUID="649018e4-7368-455c-8b92-fae29b1b01ec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 12 14:51:44.175025 master-0 kubenswrapper[37036]: I0312 14:51:44.174445 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-75dfc444b6-mtcqr" podStartSLOduration=16.174426413 podStartE2EDuration="16.174426413s" podCreationTimestamp="2026-03-12 14:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:44.157374706 +0000 UTC m=+963.165115643" watchObservedRunningTime="2026-03-12 14:51:44.174426413 +0000 UTC m=+963.182167350"
Mar 12 14:51:44.178655 master-0 kubenswrapper[37036]: I0312 14:51:44.178457 37036 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:44.178655 master-0 kubenswrapper[37036]: I0312 14:51:44.178531 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:51:44.179075 master-0 kubenswrapper[37036]: I0312 14:51:44.179024 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"9e8b1936-ecd2-4fec-bce6-88bd240ea0ae","Type":"ContainerDied","Data":"7e0f8571fa2b0b5c693d201f6033339b5542443ef2433e90927fdb2f03e11ec6"}
Mar 12 14:51:44.179142 master-0 kubenswrapper[37036]: I0312 14:51:44.179093 37036 scope.go:117] "RemoveContainer" containerID="b6861f8141728bac76ef239d5809958cef5345452e37261a4216bcbf824f1dba"
Mar 12 14:51:44.185976 master-0 kubenswrapper[37036]: I0312 14:51:44.180378 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-ht555"
Mar 12 14:51:44.185976 master-0 kubenswrapper[37036]: I0312 14:51:44.180439 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:51:44.185976 master-0 kubenswrapper[37036]: I0312 14:51:44.180475 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-865fc75fb8-6hmpx"
Mar 12 14:51:44.189751 master-0 kubenswrapper[37036]: I0312 14:51:44.189698 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data" (OuterVolumeSpecName: "config-data") pod "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" (UID: "9e8b1936-ecd2-4fec-bce6-88bd240ea0ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:51:44.290860 master-0 kubenswrapper[37036]: I0312 14:51:44.289446 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zktq7"]
Mar 12 14:51:44.305869 master-0 kubenswrapper[37036]: I0312 14:51:44.305264 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:51:44.370577 master-0 kubenswrapper[37036]: I0312 14:51:44.365117 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-994d-account-create-update-h6mj5"]
Mar 12 14:51:44.393739 master-0 kubenswrapper[37036]: W0312 14:51:44.393578 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92f3efc_76bc_40a0_b3c3_8da77d03c022.slice/crio-8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102 WatchSource:0}: Error finding container 8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102: Status 404 returned error can't find the container with id 8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102
Mar 12 14:51:44.393739 master-0 kubenswrapper[37036]: I0312 14:51:44.393618 37036 scope.go:117] "RemoveContainer" containerID="a4712f26a1b45f8060cab3e25f59a258c36af65889631302c1c3828633456a0f"
Mar 12 14:51:44.409089 master-0 kubenswrapper[37036]: I0312 14:51:44.408247 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-75dfc444b6-mtcqr" podUID="649018e4-7368-455c-8b92-fae29b1b01ec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 12 14:51:44.409089 master-0 kubenswrapper[37036]: I0312 14:51:44.409001 37036 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75dfc444b6-mtcqr" podUID="649018e4-7368-455c-8b92-fae29b1b01ec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 12 14:51:44.428355 master-0 kubenswrapper[37036]: W0312 14:51:44.428296 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72fa304a_a97e_4350_81f5_6180bc4ba594.slice/crio-77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954 WatchSource:0}: Error finding container 77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954: Status 404 returned error can't find the container with id 77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954
Mar 12 14:51:44.436247 master-0 kubenswrapper[37036]: I0312 14:51:44.435513 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 12 14:51:44.437378 master-0 kubenswrapper[37036]: I0312 14:51:44.437103 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"]
Mar 12 14:51:44.481678 master-0 kubenswrapper[37036]: I0312 14:51:44.481600 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9dzk7"]
Mar 12 14:51:44.517087 master-0 kubenswrapper[37036]: I0312 14:51:44.517037 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-865fc75fb8-6hmpx"]
Mar 12 14:51:44.546675 master-0 kubenswrapper[37036]: I0312 14:51:44.546603 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"]
Mar 12 14:51:44.618923 master-0 kubenswrapper[37036]: I0312 14:51:44.608969 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"]
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.649599 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-external-api-0"]
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650399 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650420 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650466 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650476 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650489 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-api"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650496 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-api"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650520 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650527 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650555 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650562 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650580 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650587 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: E0312 14:51:44.650600 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" containerName="ironic-inspector-db-sync"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650607 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" containerName="ironic-inspector-db-sync"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650859 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650917 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463" containerName="ironic-inspector-db-sync"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650938 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-api"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650953 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" containerName="neutron-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650966 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-log"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.650992 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.651017 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" containerName="glance-httpd"
Mar 12 14:51:44.653423 master-0 kubenswrapper[37036]: I0312 14:51:44.652585 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:51:44.656819 master-0 kubenswrapper[37036]: I0312 14:51:44.655328 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 12 14:51:44.656819 master-0 kubenswrapper[37036]: I0312 14:51:44.655347 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-external-config-data"
Mar 12 14:51:44.657305 master-0 kubenswrapper[37036]: I0312 14:51:44.657273 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 12 14:51:44.714505 master-0 kubenswrapper[37036]: I0312 14:51:44.689562 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"]
Mar 12 14:51:44.730732 master-0 kubenswrapper[37036]: I0312 14:51:44.726444 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"]
Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737016 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737145 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737264 37036 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737335 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vjc\" (UniqueName: \"kubernetes.io/projected/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-kube-api-access-24vjc\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737400 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737432 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.737463 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" 
(UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.739021 master-0 kubenswrapper[37036]: I0312 14:51:44.738316 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.740091 master-0 kubenswrapper[37036]: I0312 14:51:44.740064 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:51:44.762658 master-0 kubenswrapper[37036]: I0312 14:51:44.762610 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:51:44.764989 master-0 kubenswrapper[37036]: I0312 14:51:44.764956 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.768464 master-0 kubenswrapper[37036]: I0312 14:51:44.768418 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bc20e-default-internal-config-data" Mar 12 14:51:44.768772 master-0 kubenswrapper[37036]: I0312 14:51:44.768751 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 14:51:44.774149 master-0 kubenswrapper[37036]: I0312 14:51:44.774109 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.839907 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24vjc\" (UniqueName: \"kubernetes.io/projected/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-kube-api-access-24vjc\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840008 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840081 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840127 
37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840160 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840245 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840340 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 master-0 kubenswrapper[37036]: I0312 14:51:44.840443 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844060 
master-0 kubenswrapper[37036]: I0312 14:51:44.842569 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-logs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.844669 master-0 kubenswrapper[37036]: I0312 14:51:44.844330 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-combined-ca-bundle\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.847783 master-0 kubenswrapper[37036]: I0312 14:51:44.846301 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-httpd-run\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.847783 master-0 kubenswrapper[37036]: I0312 14:51:44.846545 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-config-data\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.847783 master-0 kubenswrapper[37036]: I0312 14:51:44.846704 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-public-tls-certs\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 
14:51:44.849175 master-0 kubenswrapper[37036]: I0312 14:51:44.848260 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:51:44.849175 master-0 kubenswrapper[37036]: I0312 14:51:44.848318 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6c4df80a2d0bf34399f5f7093642ff3bb6c859672516bc87c6e10e693c5b3679/globalmount\"" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.849462 master-0 kubenswrapper[37036]: I0312 14:51:44.849441 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-scripts\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.865858 master-0 kubenswrapper[37036]: I0312 14:51:44.865636 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24vjc\" (UniqueName: \"kubernetes.io/projected/d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c-kube-api-access-24vjc\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:44.942271 master-0 kubenswrapper[37036]: I0312 14:51:44.942148 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " 
pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.942449 master-0 kubenswrapper[37036]: I0312 14:51:44.942276 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.942830 master-0 kubenswrapper[37036]: I0312 14:51:44.942463 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.942830 master-0 kubenswrapper[37036]: I0312 14:51:44.942563 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.942942 master-0 kubenswrapper[37036]: I0312 14:51:44.942797 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.954075 master-0 kubenswrapper[37036]: I0312 14:51:44.943103 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.954075 master-0 kubenswrapper[37036]: I0312 14:51:44.943169 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:44.954075 master-0 kubenswrapper[37036]: I0312 14:51:44.943245 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh7d\" (UniqueName: \"kubernetes.io/projected/23884bb7-c60a-40ec-b96e-7b5280cea5f5-kube-api-access-mzh7d\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.047338 master-0 kubenswrapper[37036]: I0312 14:51:45.047278 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047462 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 
14:51:45.047495 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047561 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh7d\" (UniqueName: \"kubernetes.io/projected/23884bb7-c60a-40ec-b96e-7b5280cea5f5-kube-api-access-mzh7d\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047725 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047768 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047845 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " 
pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.047887 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.049514 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-httpd-run\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.050018 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23884bb7-c60a-40ec-b96e-7b5280cea5f5-logs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.052473 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-internal-tls-certs\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.052735 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-scripts\") pod \"glance-bc20e-default-internal-api-0\" (UID: 
\"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.052974 37036 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.052995 37036 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/49ce0cc1a33b2e89ca28a5f90915fcaf3a1dd141d163d3ea96d25fddb3a57200/globalmount\"" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.056931 master-0 kubenswrapper[37036]: I0312 14:51:45.055346 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-combined-ca-bundle\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.074428 master-0 kubenswrapper[37036]: I0312 14:51:45.074359 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23884bb7-c60a-40ec-b96e-7b5280cea5f5-config-data\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.088923 master-0 kubenswrapper[37036]: I0312 14:51:45.086864 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh7d\" (UniqueName: \"kubernetes.io/projected/23884bb7-c60a-40ec-b96e-7b5280cea5f5-kube-api-access-mzh7d\") pod 
\"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:45.197333 master-0 kubenswrapper[37036]: I0312 14:51:45.197282 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zktq7" event={"ID":"f92f3efc-76bc-40a0-b3c3-8da77d03c022","Type":"ContainerStarted","Data":"8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102"} Mar 12 14:51:45.198689 master-0 kubenswrapper[37036]: I0312 14:51:45.198660 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a000-account-create-update-t5sxm" event={"ID":"eb4a72f6-6d97-4d7b-a538-11604e6144ea","Type":"ContainerStarted","Data":"43055d2901cfbeee332f35966603e29e2dceed0b519b7f4fa8b3c788cc7d6288"} Mar 12 14:51:45.202555 master-0 kubenswrapper[37036]: I0312 14:51:45.202512 37036 generic.go:334] "Generic (PLEG): container finished" podID="78f8830e-f634-424d-b7b7-606453255117" containerID="493a3f7dd30049dbc9300fbe1793f1c92e24f73ae60232387d925f9a1db82115" exitCode=143 Mar 12 14:51:45.202716 master-0 kubenswrapper[37036]: I0312 14:51:45.202638 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerDied","Data":"493a3f7dd30049dbc9300fbe1793f1c92e24f73ae60232387d925f9a1db82115"} Mar 12 14:51:45.205952 master-0 kubenswrapper[37036]: I0312 14:51:45.205908 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fqxgc" event={"ID":"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4","Type":"ContainerStarted","Data":"ac4bc26b93a0da215e68d6b8a0d64cc0bee228897330c7b001614559fd5ac866"} Mar 12 14:51:45.234379 master-0 kubenswrapper[37036]: I0312 14:51:45.234326 37036 scope.go:117] "RemoveContainer" containerID="e6668d23518a999e94ef455dd1dbffa2ccd0f155ccfa0d0b3c381d6e799708d0" Mar 12 14:51:45.254273 master-0 kubenswrapper[37036]: I0312 14:51:45.253541 
37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a000-account-create-update-t5sxm" podStartSLOduration=9.253513004 podStartE2EDuration="9.253513004s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.218803269 +0000 UTC m=+964.226544206" watchObservedRunningTime="2026-03-12 14:51:45.253513004 +0000 UTC m=+964.261253961" Mar 12 14:51:45.277921 master-0 kubenswrapper[37036]: I0312 14:51:45.275433 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0289ee73-116b-4f34-ae6e-5560906a2df8" path="/var/lib/kubelet/pods/0289ee73-116b-4f34-ae6e-5560906a2df8/volumes" Mar 12 14:51:45.282911 master-0 kubenswrapper[37036]: I0312 14:51:45.279861 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5b885c-0466-4883-9af2-c8942c5b700c" path="/var/lib/kubelet/pods/3a5b885c-0466-4883-9af2-c8942c5b700c/volumes" Mar 12 14:51:45.286912 master-0 kubenswrapper[37036]: I0312 14:51:45.283353 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8b1936-ecd2-4fec-bce6-88bd240ea0ae" path="/var/lib/kubelet/pods/9e8b1936-ecd2-4fec-bce6-88bd240ea0ae/volumes" Mar 12 14:51:45.286912 master-0 kubenswrapper[37036]: I0312 14:51:45.285264 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"6df4f9d9f166aad0b9ffccbf80385cf6ff72377afd79a9fa05a65c47994878fd"} Mar 12 14:51:45.286912 master-0 kubenswrapper[37036]: I0312 14:51:45.285295 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d364-account-create-update-dznnx" event={"ID":"14afc361-6f21-415e-af7b-7ed3a4f9c48b","Type":"ContainerStarted","Data":"326475cc1c3676214c65e247b9e7092ee89d8d1d5f60a20a2746268509ad4f47"} Mar 12 14:51:45.286912 
master-0 kubenswrapper[37036]: I0312 14:51:45.285309 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9dzk7" event={"ID":"72fa304a-a97e-4350-81f5-6180bc4ba594","Type":"ContainerStarted","Data":"77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954"} Mar 12 14:51:45.286912 master-0 kubenswrapper[37036]: I0312 14:51:45.285322 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" event={"ID":"112425ab-cbf2-468c-b40c-e64aa339389c","Type":"ContainerStarted","Data":"9b38ed165da3741c5e41e75968f12f18c6646c6bbde0eec7c7c57d57e79a89ae"} Mar 12 14:51:45.290907 master-0 kubenswrapper[37036]: I0312 14:51:45.288195 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-fqxgc" podStartSLOduration=9.288174478 podStartE2EDuration="9.288174478s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.239446789 +0000 UTC m=+964.247187736" watchObservedRunningTime="2026-03-12 14:51:45.288174478 +0000 UTC m=+964.295915435" Mar 12 14:51:45.290907 master-0 kubenswrapper[37036]: I0312 14:51:45.288465 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-75dfc444b6-mtcqr" podUID="649018e4-7368-455c-8b92-fae29b1b01ec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 12 14:51:45.321921 master-0 kubenswrapper[37036]: I0312 14:51:45.321150 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d364-account-create-update-dznnx" podStartSLOduration=9.321124843 podStartE2EDuration="9.321124843s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 14:51:45.305629622 +0000 UTC m=+964.313370559" watchObservedRunningTime="2026-03-12 14:51:45.321124843 +0000 UTC m=+964.328865780" Mar 12 14:51:45.368306 master-0 kubenswrapper[37036]: I0312 14:51:45.365936 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9dzk7" podStartSLOduration=9.365889073 podStartE2EDuration="9.365889073s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.327371461 +0000 UTC m=+964.335112398" watchObservedRunningTime="2026-03-12 14:51:45.365889073 +0000 UTC m=+964.373630010" Mar 12 14:51:45.409932 master-0 kubenswrapper[37036]: I0312 14:51:45.409816 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" podStartSLOduration=9.409790078 podStartE2EDuration="9.409790078s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:45.407104851 +0000 UTC m=+964.414845788" watchObservedRunningTime="2026-03-12 14:51:45.409790078 +0000 UTC m=+964.417531015" Mar 12 14:51:45.663931 master-0 kubenswrapper[37036]: I0312 14:51:45.663877 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"] Mar 12 14:51:45.728186 master-0 kubenswrapper[37036]: I0312 14:51:45.726796 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.786936 master-0 kubenswrapper[37036]: I0312 14:51:45.774265 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"] Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804074 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804253 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8fxl\" (UniqueName: \"kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804311 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804392 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804652 
37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.834821 master-0 kubenswrapper[37036]: I0312 14:51:45.804725 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.902924 master-0 kubenswrapper[37036]: I0312 14:51:45.902300 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 14:51:45.917961 master-0 kubenswrapper[37036]: I0312 14:51:45.914642 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8fxl\" (UniqueName: \"kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.917961 master-0 kubenswrapper[37036]: I0312 14:51:45.914720 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.917961 master-0 kubenswrapper[37036]: I0312 14:51:45.914782 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.917961 master-0 kubenswrapper[37036]: I0312 14:51:45.917517 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.923753 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.923918 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.923964 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.924237 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.925315 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.925850 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.927926 master-0 kubenswrapper[37036]: I0312 14:51:45.926188 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:45.928327 master-0 kubenswrapper[37036]: I0312 14:51:45.928008 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 14:51:45.928327 master-0 kubenswrapper[37036]: I0312 14:51:45.928127 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 14:51:45.938924 master-0 kubenswrapper[37036]: I0312 14:51:45.936960 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 12 14:51:45.938924 master-0 kubenswrapper[37036]: I0312 14:51:45.937195 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 12 14:51:45.938924 master-0 kubenswrapper[37036]: I0312 14:51:45.937304 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 12 14:51:45.951936 master-0 kubenswrapper[37036]: I0312 14:51:45.946098 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8fxl\" (UniqueName: \"kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl\") pod \"dnsmasq-dns-7947596457-rj5wn\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") " pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.025744 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xknc\" (UniqueName: \"kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.025807 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.025850 37036 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.025904 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.025971 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.026065 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.033805 master-0 kubenswrapper[37036]: I0312 14:51:46.026090 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128282 master-0 kubenswrapper[37036]: I0312 14:51:46.128202 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128309 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128404 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128455 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128526 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xknc\" (UniqueName: \"kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128564 37036 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.128876 master-0 kubenswrapper[37036]: I0312 14:51:46.128619 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.131073 master-0 kubenswrapper[37036]: I0312 14:51:46.130471 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.131073 master-0 kubenswrapper[37036]: I0312 14:51:46.130665 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:46.131202 master-0 kubenswrapper[37036]: I0312 14:51:46.131066 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.134613 master-0 kubenswrapper[37036]: I0312 14:51:46.134568 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.138789 master-0 kubenswrapper[37036]: I0312 14:51:46.138740 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.139003 master-0 kubenswrapper[37036]: I0312 14:51:46.138968 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.141513 master-0 kubenswrapper[37036]: I0312 14:51:46.141469 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.160789 master-0 kubenswrapper[37036]: I0312 14:51:46.160725 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xknc\" (UniqueName: \"kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc\") pod \"ironic-inspector-0\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " pod="openstack/ironic-inspector-0" Mar 12 14:51:46.343924 master-0 kubenswrapper[37036]: I0312 14:51:46.342026 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 14:51:46.376928 master-0 kubenswrapper[37036]: I0312 14:51:46.376077 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" event={"ID":"112425ab-cbf2-468c-b40c-e64aa339389c","Type":"ContainerStarted","Data":"ae4479fa7869dfe1007111b51cd8e91b2af2c99f9334d21088b49b60aac8264c"} Mar 12 14:51:46.416797 master-0 kubenswrapper[37036]: I0312 14:51:46.416129 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9dzk7" event={"ID":"72fa304a-a97e-4350-81f5-6180bc4ba594","Type":"ContainerStarted","Data":"7e76f37192143bde354df4c1ec31ab4c04c5fca6d7b64de30f5bb713a5657ebe"} Mar 12 14:51:46.451823 master-0 kubenswrapper[37036]: I0312 14:51:46.451708 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" event={"ID":"ee3a29d2-bf14-4521-896e-b0169adefcb2","Type":"ContainerStarted","Data":"8a9c3c6ab5c9f3c50ab23ef44b2a46de5778e52c8654166f920ed2e12cb6b648"} Mar 12 14:51:46.453372 master-0 kubenswrapper[37036]: I0312 14:51:46.453330 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:46.499523 master-0 kubenswrapper[37036]: I0312 14:51:46.499449 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zktq7" 
event={"ID":"f92f3efc-76bc-40a0-b3c3-8da77d03c022","Type":"ContainerStarted","Data":"985b0f8c87d88b0b1ecff24fa6c53f77da9a7046df2f7161b36112ce418f5190"} Mar 12 14:51:46.728923 master-0 kubenswrapper[37036]: I0312 14:51:46.723330 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-zktq7" podStartSLOduration=10.723308295 podStartE2EDuration="10.723308295s" podCreationTimestamp="2026-03-12 14:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:46.712406286 +0000 UTC m=+965.720147223" watchObservedRunningTime="2026-03-12 14:51:46.723308295 +0000 UTC m=+965.731049232" Mar 12 14:51:46.936003 master-0 kubenswrapper[37036]: I0312 14:51:46.931747 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"] Mar 12 14:51:47.511045 master-0 kubenswrapper[37036]: I0312 14:51:47.510969 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerStarted","Data":"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"} Mar 12 14:51:47.511045 master-0 kubenswrapper[37036]: I0312 14:51:47.511035 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerStarted","Data":"180a4096e51ce18cd543e292b1b85b408608e09847aa43e08fa72b6448b6b506"} Mar 12 14:51:48.909962 master-0 kubenswrapper[37036]: I0312 14:51:48.909868 37036 generic.go:334] "Generic (PLEG): container finished" podID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerID="6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a" exitCode=0 Mar 12 14:51:48.910524 master-0 kubenswrapper[37036]: I0312 14:51:48.909985 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerDied","Data":"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"} Mar 12 14:51:48.914963 master-0 kubenswrapper[37036]: I0312 14:51:48.914917 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerDied","Data":"968ac1a419ce83c9df0b7935a1d2d00b1c77d46fefb605ce697a12a530f04a12"} Mar 12 14:51:48.914963 master-0 kubenswrapper[37036]: I0312 14:51:48.914961 37036 generic.go:334] "Generic (PLEG): container finished" podID="78f8830e-f634-424d-b7b7-606453255117" containerID="968ac1a419ce83c9df0b7935a1d2d00b1c77d46fefb605ce697a12a530f04a12" exitCode=0 Mar 12 14:51:49.373415 master-0 kubenswrapper[37036]: I0312 14:51:49.373375 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:49.379876 master-0 kubenswrapper[37036]: I0312 14:51:49.379804 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75dfc444b6-mtcqr" Mar 12 14:51:49.597367 master-0 kubenswrapper[37036]: W0312 14:51:49.597327 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3a5f1e_e962_49a7_8cbc_586918ab5a4d.slice/crio-1771a1c394d41b9d32f767840e48e897e72bfdccd5a7cd252c8929dde0f13698 WatchSource:0}: Error finding container 1771a1c394d41b9d32f767840e48e897e72bfdccd5a7cd252c8929dde0f13698: Status 404 returned error can't find the container with id 1771a1c394d41b9d32f767840e48e897e72bfdccd5a7cd252c8929dde0f13698 Mar 12 14:51:49.608867 master-0 kubenswrapper[37036]: I0312 14:51:49.608832 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 14:51:49.864985 master-0 kubenswrapper[37036]: I0312 14:51:49.864934 37036 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:51:50.152694 master-0 kubenswrapper[37036]: I0312 14:51:50.152555 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d","Type":"ContainerStarted","Data":"1771a1c394d41b9d32f767840e48e897e72bfdccd5a7cd252c8929dde0f13698"} Mar 12 14:51:50.154358 master-0 kubenswrapper[37036]: I0312 14:51:50.154335 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b5866567d-h9t4r" event={"ID":"78f8830e-f634-424d-b7b7-606453255117","Type":"ContainerDied","Data":"173400c1b05979fc25f005d7ed422981b58f01bfb55887d568e9b58239906968"} Mar 12 14:51:50.154428 master-0 kubenswrapper[37036]: I0312 14:51:50.154369 37036 scope.go:117] "RemoveContainer" containerID="968ac1a419ce83c9df0b7935a1d2d00b1c77d46fefb605ce697a12a530f04a12" Mar 12 14:51:50.154499 master-0 kubenswrapper[37036]: I0312 14:51:50.154481 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b5866567d-h9t4r" Mar 12 14:51:50.173243 master-0 kubenswrapper[37036]: I0312 14:51:50.173213 37036 generic.go:334] "Generic (PLEG): container finished" podID="8c0524b9-cbf3-40e3-9424-98b634ba1b10" containerID="6df4f9d9f166aad0b9ffccbf80385cf6ff72377afd79a9fa05a65c47994878fd" exitCode=0 Mar 12 14:51:50.173407 master-0 kubenswrapper[37036]: I0312 14:51:50.173259 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerDied","Data":"6df4f9d9f166aad0b9ffccbf80385cf6ff72377afd79a9fa05a65c47994878fd"} Mar 12 14:51:50.178049 master-0 kubenswrapper[37036]: I0312 14:51:50.177645 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerStarted","Data":"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"} Mar 12 14:51:50.178049 master-0 kubenswrapper[37036]: I0312 14:51:50.177678 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:50.223368 master-0 kubenswrapper[37036]: I0312 14:51:50.223322 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223526 master-0 kubenswrapper[37036]: I0312 14:51:50.223442 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223604 master-0 kubenswrapper[37036]: I0312 14:51:50.223568 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223655 master-0 kubenswrapper[37036]: I0312 14:51:50.223613 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27k4\" (UniqueName: \"kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223761 master-0 kubenswrapper[37036]: I0312 14:51:50.223742 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223811 master-0 kubenswrapper[37036]: I0312 14:51:50.223767 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.223861 master-0 kubenswrapper[37036]: I0312 14:51:50.223822 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data\") pod \"78f8830e-f634-424d-b7b7-606453255117\" (UID: \"78f8830e-f634-424d-b7b7-606453255117\") " Mar 12 14:51:50.296267 master-0 kubenswrapper[37036]: I0312 14:51:50.296182 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4" (OuterVolumeSpecName: 
"kube-api-access-h27k4") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "kube-api-access-h27k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:50.298540 master-0 kubenswrapper[37036]: I0312 14:51:50.298489 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts" (OuterVolumeSpecName: "scripts") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:50.299766 master-0 kubenswrapper[37036]: I0312 14:51:50.299716 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs" (OuterVolumeSpecName: "logs") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:51:50.305262 master-0 kubenswrapper[37036]: I0312 14:51:50.305186 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data" (OuterVolumeSpecName: "config-data") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:50.313593 master-0 kubenswrapper[37036]: I0312 14:51:50.313377 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:50.314487 master-0 kubenswrapper[37036]: I0312 14:51:50.314421 37036 scope.go:117] "RemoveContainer" containerID="493a3f7dd30049dbc9300fbe1793f1c92e24f73ae60232387d925f9a1db82115" Mar 12 14:51:50.327198 master-0 kubenswrapper[37036]: I0312 14:51:50.327134 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.327198 master-0 kubenswrapper[37036]: I0312 14:51:50.327174 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.327198 master-0 kubenswrapper[37036]: I0312 14:51:50.327184 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h27k4\" (UniqueName: \"kubernetes.io/projected/78f8830e-f634-424d-b7b7-606453255117-kube-api-access-h27k4\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.327198 master-0 kubenswrapper[37036]: I0312 14:51:50.327201 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78f8830e-f634-424d-b7b7-606453255117-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.327448 master-0 kubenswrapper[37036]: I0312 14:51:50.327212 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.370145 master-0 kubenswrapper[37036]: I0312 14:51:50.369967 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: 
"78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:50.400807 master-0 kubenswrapper[37036]: I0312 14:51:50.400757 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "78f8830e-f634-424d-b7b7-606453255117" (UID: "78f8830e-f634-424d-b7b7-606453255117"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:51:50.429792 master-0 kubenswrapper[37036]: I0312 14:51:50.429724 37036 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.429792 master-0 kubenswrapper[37036]: I0312 14:51:50.429786 37036 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78f8830e-f634-424d-b7b7-606453255117-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:50.724557 master-0 kubenswrapper[37036]: I0312 14:51:50.724462 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7947596457-rj5wn" podStartSLOduration=5.724439765 podStartE2EDuration="5.724439765s" podCreationTimestamp="2026-03-12 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:50.718602743 +0000 UTC m=+969.726343690" watchObservedRunningTime="2026-03-12 14:51:50.724439765 +0000 UTC m=+969.732180702" Mar 12 14:51:51.188057 master-0 kubenswrapper[37036]: I0312 14:51:51.187987 37036 generic.go:334] "Generic (PLEG): container finished" podID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" 
containerID="a1329fe3eb21e8c693a4f679ba5724bab1d7e069b873f7c81d0bf3fbdc035e10" exitCode=0 Mar 12 14:51:51.188057 master-0 kubenswrapper[37036]: I0312 14:51:51.188059 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d","Type":"ContainerDied","Data":"a1329fe3eb21e8c693a4f679ba5724bab1d7e069b873f7c81d0bf3fbdc035e10"} Mar 12 14:51:51.191436 master-0 kubenswrapper[37036]: I0312 14:51:51.191399 37036 generic.go:334] "Generic (PLEG): container finished" podID="eb4a72f6-6d97-4d7b-a538-11604e6144ea" containerID="43055d2901cfbeee332f35966603e29e2dceed0b519b7f4fa8b3c788cc7d6288" exitCode=0 Mar 12 14:51:51.191585 master-0 kubenswrapper[37036]: I0312 14:51:51.191496 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a000-account-create-update-t5sxm" event={"ID":"eb4a72f6-6d97-4d7b-a538-11604e6144ea","Type":"ContainerDied","Data":"43055d2901cfbeee332f35966603e29e2dceed0b519b7f4fa8b3c788cc7d6288"} Mar 12 14:51:51.194117 master-0 kubenswrapper[37036]: I0312 14:51:51.194085 37036 generic.go:334] "Generic (PLEG): container finished" podID="e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" containerID="ac4bc26b93a0da215e68d6b8a0d64cc0bee228897330c7b001614559fd5ac866" exitCode=0 Mar 12 14:51:51.194201 master-0 kubenswrapper[37036]: I0312 14:51:51.194131 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fqxgc" event={"ID":"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4","Type":"ContainerDied","Data":"ac4bc26b93a0da215e68d6b8a0d64cc0bee228897330c7b001614559fd5ac866"} Mar 12 14:51:51.195516 master-0 kubenswrapper[37036]: I0312 14:51:51.195464 37036 generic.go:334] "Generic (PLEG): container finished" podID="112425ab-cbf2-468c-b40c-e64aa339389c" containerID="ae4479fa7869dfe1007111b51cd8e91b2af2c99f9334d21088b49b60aac8264c" exitCode=0 Mar 12 14:51:51.195516 master-0 kubenswrapper[37036]: I0312 14:51:51.195483 37036 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" event={"ID":"112425ab-cbf2-468c-b40c-e64aa339389c","Type":"ContainerDied","Data":"ae4479fa7869dfe1007111b51cd8e91b2af2c99f9334d21088b49b60aac8264c"} Mar 12 14:51:51.196895 master-0 kubenswrapper[37036]: I0312 14:51:51.196864 37036 generic.go:334] "Generic (PLEG): container finished" podID="72fa304a-a97e-4350-81f5-6180bc4ba594" containerID="7e76f37192143bde354df4c1ec31ab4c04c5fca6d7b64de30f5bb713a5657ebe" exitCode=0 Mar 12 14:51:51.196985 master-0 kubenswrapper[37036]: I0312 14:51:51.196922 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9dzk7" event={"ID":"72fa304a-a97e-4350-81f5-6180bc4ba594","Type":"ContainerDied","Data":"7e76f37192143bde354df4c1ec31ab4c04c5fca6d7b64de30f5bb713a5657ebe"} Mar 12 14:51:51.198256 master-0 kubenswrapper[37036]: I0312 14:51:51.198228 37036 generic.go:334] "Generic (PLEG): container finished" podID="14afc361-6f21-415e-af7b-7ed3a4f9c48b" containerID="326475cc1c3676214c65e247b9e7092ee89d8d1d5f60a20a2746268509ad4f47" exitCode=0 Mar 12 14:51:51.198355 master-0 kubenswrapper[37036]: I0312 14:51:51.198305 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d364-account-create-update-dznnx" event={"ID":"14afc361-6f21-415e-af7b-7ed3a4f9c48b","Type":"ContainerDied","Data":"326475cc1c3676214c65e247b9e7092ee89d8d1d5f60a20a2746268509ad4f47"} Mar 12 14:51:51.199806 master-0 kubenswrapper[37036]: I0312 14:51:51.199752 37036 generic.go:334] "Generic (PLEG): container finished" podID="f92f3efc-76bc-40a0-b3c3-8da77d03c022" containerID="985b0f8c87d88b0b1ecff24fa6c53f77da9a7046df2f7161b36112ce418f5190" exitCode=0 Mar 12 14:51:51.199892 master-0 kubenswrapper[37036]: I0312 14:51:51.199823 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zktq7" 
event={"ID":"f92f3efc-76bc-40a0-b3c3-8da77d03c022","Type":"ContainerDied","Data":"985b0f8c87d88b0b1ecff24fa6c53f77da9a7046df2f7161b36112ce418f5190"} Mar 12 14:51:51.487801 master-0 kubenswrapper[37036]: I0312 14:51:51.487726 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b5866567d-h9t4r"] Mar 12 14:51:51.847710 master-0 kubenswrapper[37036]: I0312 14:51:51.847646 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b5866567d-h9t4r"] Mar 12 14:51:52.424223 master-0 kubenswrapper[37036]: I0312 14:51:52.420719 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a339e40b-4843-4796-8fbe-a3a0ca45a5a2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b388bada-a531-4c3f-bf6b-3b84af4376f1\") pod \"glance-bc20e-default-external-api-0\" (UID: \"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c\") " pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:52.450924 master-0 kubenswrapper[37036]: I0312 14:51:52.446545 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-5685659465-xhxkv" Mar 12 14:51:52.583120 master-0 kubenswrapper[37036]: I0312 14:51:52.582617 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:51:53.142945 master-0 kubenswrapper[37036]: I0312 14:51:53.138120 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:53.191124 master-0 kubenswrapper[37036]: I0312 14:51:53.187790 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:53.263528 master-0 kubenswrapper[37036]: I0312 14:51:53.263345 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts\") pod \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " Mar 12 14:51:53.263755 master-0 kubenswrapper[37036]: I0312 14:51:53.263515 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8g48\" (UniqueName: \"kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48\") pod \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " Mar 12 14:51:53.264292 master-0 kubenswrapper[37036]: I0312 14:51:53.264218 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts\") pod \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\" (UID: \"f92f3efc-76bc-40a0-b3c3-8da77d03c022\") " Mar 12 14:51:53.264417 master-0 kubenswrapper[37036]: I0312 14:51:53.264345 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxh2r\" (UniqueName: \"kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r\") pod \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\" (UID: \"14afc361-6f21-415e-af7b-7ed3a4f9c48b\") " Mar 12 14:51:53.268553 master-0 kubenswrapper[37036]: I0312 14:51:53.267840 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:53.269418 master-0 kubenswrapper[37036]: I0312 14:51:53.269361 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f92f3efc-76bc-40a0-b3c3-8da77d03c022" (UID: "f92f3efc-76bc-40a0-b3c3-8da77d03c022"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.270096 master-0 kubenswrapper[37036]: I0312 14:51:53.270060 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14afc361-6f21-415e-af7b-7ed3a4f9c48b" (UID: "14afc361-6f21-415e-af7b-7ed3a4f9c48b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.274607 master-0 kubenswrapper[37036]: I0312 14:51:53.273479 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48" (OuterVolumeSpecName: "kube-api-access-m8g48") pod "f92f3efc-76bc-40a0-b3c3-8da77d03c022" (UID: "f92f3efc-76bc-40a0-b3c3-8da77d03c022"). InnerVolumeSpecName "kube-api-access-m8g48". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.277997 master-0 kubenswrapper[37036]: I0312 14:51:53.277462 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r" (OuterVolumeSpecName: "kube-api-access-xxh2r") pod "14afc361-6f21-415e-af7b-7ed3a4f9c48b" (UID: "14afc361-6f21-415e-af7b-7ed3a4f9c48b"). InnerVolumeSpecName "kube-api-access-xxh2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.284025 master-0 kubenswrapper[37036]: I0312 14:51:53.283760 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d364-account-create-update-dznnx" Mar 12 14:51:53.290964 master-0 kubenswrapper[37036]: I0312 14:51:53.287032 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zktq7" Mar 12 14:51:53.329616 master-0 kubenswrapper[37036]: I0312 14:51:53.329360 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c23b5911-fb8a-49ca-a229-b24d2fc68f14\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbac9550-b4fd-4c80-96a6-54c391bed946\") pod \"glance-bc20e-default-internal-api-0\" (UID: \"23884bb7-c60a-40ec-b96e-7b5280cea5f5\") " pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:53.411460 master-0 kubenswrapper[37036]: I0312 14:51:53.410198 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxh2r\" (UniqueName: \"kubernetes.io/projected/14afc361-6f21-415e-af7b-7ed3a4f9c48b-kube-api-access-xxh2r\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.411460 master-0 kubenswrapper[37036]: I0312 14:51:53.410244 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14afc361-6f21-415e-af7b-7ed3a4f9c48b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.411460 master-0 kubenswrapper[37036]: I0312 14:51:53.410254 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8g48\" (UniqueName: \"kubernetes.io/projected/f92f3efc-76bc-40a0-b3c3-8da77d03c022-kube-api-access-m8g48\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.411460 master-0 kubenswrapper[37036]: I0312 14:51:53.410263 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f92f3efc-76bc-40a0-b3c3-8da77d03c022-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.473986 master-0 kubenswrapper[37036]: I0312 14:51:53.473891 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f8830e-f634-424d-b7b7-606453255117" path="/var/lib/kubelet/pods/78f8830e-f634-424d-b7b7-606453255117/volumes" Mar 12 14:51:53.486076 master-0 kubenswrapper[37036]: I0312 14:51:53.483101 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:53.496639 master-0 kubenswrapper[37036]: I0312 14:51:53.496600 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fqxgc" event={"ID":"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4","Type":"ContainerDied","Data":"a366e663d10b8d6c064cb828dfc4cc618c3bb8b2bc76751191a7d69b2339ae6e"} Mar 12 14:51:53.496852 master-0 kubenswrapper[37036]: I0312 14:51:53.496834 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a366e663d10b8d6c064cb828dfc4cc618c3bb8b2bc76751191a7d69b2339ae6e" Mar 12 14:51:53.497248 master-0 kubenswrapper[37036]: I0312 14:51:53.497228 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d364-account-create-update-dznnx" event={"ID":"14afc361-6f21-415e-af7b-7ed3a4f9c48b","Type":"ContainerDied","Data":"8eb3a8d223a44d45be1ac9f6c12a1021522a12d04b6ded12df6cb1901a54fa48"} Mar 12 14:51:53.497364 master-0 kubenswrapper[37036]: I0312 14:51:53.497347 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb3a8d223a44d45be1ac9f6c12a1021522a12d04b6ded12df6cb1901a54fa48" Mar 12 14:51:53.497467 master-0 kubenswrapper[37036]: I0312 14:51:53.497448 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zktq7" 
event={"ID":"f92f3efc-76bc-40a0-b3c3-8da77d03c022","Type":"ContainerDied","Data":"8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102"} Mar 12 14:51:53.497573 master-0 kubenswrapper[37036]: I0312 14:51:53.497555 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8c00c7b8c1575c747d8226104e44e0b329740cd1536214a4a38c1f175de102" Mar 12 14:51:53.517483 master-0 kubenswrapper[37036]: I0312 14:51:53.517421 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts\") pod \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " Mar 12 14:51:53.517688 master-0 kubenswrapper[37036]: I0312 14:51:53.517503 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dncv\" (UniqueName: \"kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv\") pod \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\" (UID: \"e762e1b3-ab0c-47b4-88a1-4e4030b12ed4\") " Mar 12 14:51:53.524342 master-0 kubenswrapper[37036]: I0312 14:51:53.524295 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" (UID: "e762e1b3-ab0c-47b4-88a1-4e4030b12ed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.526341 master-0 kubenswrapper[37036]: I0312 14:51:53.526315 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:53.533387 master-0 kubenswrapper[37036]: I0312 14:51:53.533291 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv" (OuterVolumeSpecName: "kube-api-access-4dncv") pod "e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" (UID: "e762e1b3-ab0c-47b4-88a1-4e4030b12ed4"). InnerVolumeSpecName "kube-api-access-4dncv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.540041 master-0 kubenswrapper[37036]: I0312 14:51:53.539604 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.540041 master-0 kubenswrapper[37036]: I0312 14:51:53.540037 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dncv\" (UniqueName: \"kubernetes.io/projected/e762e1b3-ab0c-47b4-88a1-4e4030b12ed4-kube-api-access-4dncv\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.549501 master-0 kubenswrapper[37036]: I0312 14:51:53.549448 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bc20e-default-internal-api-0" Mar 12 14:51:53.555620 master-0 kubenswrapper[37036]: I0312 14:51:53.553807 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:53.641359 master-0 kubenswrapper[37036]: I0312 14:51:53.641310 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts\") pod \"112425ab-cbf2-468c-b40c-e64aa339389c\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " Mar 12 14:51:53.641519 master-0 kubenswrapper[37036]: I0312 14:51:53.641406 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hj8k\" (UniqueName: \"kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k\") pod \"72fa304a-a97e-4350-81f5-6180bc4ba594\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " Mar 12 14:51:53.641573 master-0 kubenswrapper[37036]: I0312 14:51:53.641550 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6mhk\" (UniqueName: \"kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk\") pod \"112425ab-cbf2-468c-b40c-e64aa339389c\" (UID: \"112425ab-cbf2-468c-b40c-e64aa339389c\") " Mar 12 14:51:53.641632 master-0 kubenswrapper[37036]: I0312 14:51:53.641621 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts\") pod \"72fa304a-a97e-4350-81f5-6180bc4ba594\" (UID: \"72fa304a-a97e-4350-81f5-6180bc4ba594\") " Mar 12 14:51:53.641966 master-0 kubenswrapper[37036]: I0312 14:51:53.641752 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "112425ab-cbf2-468c-b40c-e64aa339389c" (UID: "112425ab-cbf2-468c-b40c-e64aa339389c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.642555 master-0 kubenswrapper[37036]: I0312 14:51:53.642509 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72fa304a-a97e-4350-81f5-6180bc4ba594" (UID: "72fa304a-a97e-4350-81f5-6180bc4ba594"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.645554 master-0 kubenswrapper[37036]: I0312 14:51:53.645306 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk" (OuterVolumeSpecName: "kube-api-access-t6mhk") pod "112425ab-cbf2-468c-b40c-e64aa339389c" (UID: "112425ab-cbf2-468c-b40c-e64aa339389c"). InnerVolumeSpecName "kube-api-access-t6mhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.646259 master-0 kubenswrapper[37036]: I0312 14:51:53.646233 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112425ab-cbf2-468c-b40c-e64aa339389c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.646340 master-0 kubenswrapper[37036]: I0312 14:51:53.646261 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6mhk\" (UniqueName: \"kubernetes.io/projected/112425ab-cbf2-468c-b40c-e64aa339389c-kube-api-access-t6mhk\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.646340 master-0 kubenswrapper[37036]: I0312 14:51:53.646277 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72fa304a-a97e-4350-81f5-6180bc4ba594-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.649644 master-0 kubenswrapper[37036]: I0312 14:51:53.649427 37036 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k" (OuterVolumeSpecName: "kube-api-access-9hj8k") pod "72fa304a-a97e-4350-81f5-6180bc4ba594" (UID: "72fa304a-a97e-4350-81f5-6180bc4ba594"). InnerVolumeSpecName "kube-api-access-9hj8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.752518 master-0 kubenswrapper[37036]: I0312 14:51:53.746883 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts\") pod \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " Mar 12 14:51:53.752518 master-0 kubenswrapper[37036]: I0312 14:51:53.746964 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnwf\" (UniqueName: \"kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf\") pod \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\" (UID: \"eb4a72f6-6d97-4d7b-a538-11604e6144ea\") " Mar 12 14:51:53.752518 master-0 kubenswrapper[37036]: I0312 14:51:53.748483 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb4a72f6-6d97-4d7b-a538-11604e6144ea" (UID: "eb4a72f6-6d97-4d7b-a538-11604e6144ea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:51:53.752518 master-0 kubenswrapper[37036]: I0312 14:51:53.749691 37036 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb4a72f6-6d97-4d7b-a538-11604e6144ea-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.752518 master-0 kubenswrapper[37036]: I0312 14:51:53.749713 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hj8k\" (UniqueName: \"kubernetes.io/projected/72fa304a-a97e-4350-81f5-6180bc4ba594-kube-api-access-9hj8k\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.774936 master-0 kubenswrapper[37036]: I0312 14:51:53.773259 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf" (OuterVolumeSpecName: "kube-api-access-bcnwf") pod "eb4a72f6-6d97-4d7b-a538-11604e6144ea" (UID: "eb4a72f6-6d97-4d7b-a538-11604e6144ea"). InnerVolumeSpecName "kube-api-access-bcnwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:51:53.785856 master-0 kubenswrapper[37036]: I0312 14:51:53.785429 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-external-api-0"] Mar 12 14:51:53.814177 master-0 kubenswrapper[37036]: W0312 14:51:53.814124 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f44e4f_ee7c_47ac_a347_0f91e81dfb2c.slice/crio-3baa4ee043c968cd9d55fda5701916d7a75ed18a32185a4e4a739799948f21c3 WatchSource:0}: Error finding container 3baa4ee043c968cd9d55fda5701916d7a75ed18a32185a4e4a739799948f21c3: Status 404 returned error can't find the container with id 3baa4ee043c968cd9d55fda5701916d7a75ed18a32185a4e4a739799948f21c3 Mar 12 14:51:53.853723 master-0 kubenswrapper[37036]: I0312 14:51:53.852951 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnwf\" (UniqueName: \"kubernetes.io/projected/eb4a72f6-6d97-4d7b-a538-11604e6144ea-kube-api-access-bcnwf\") on node \"master-0\" DevicePath \"\"" Mar 12 14:51:53.989911 master-0 kubenswrapper[37036]: I0312 14:51:53.989852 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 14:51:54.270651 master-0 kubenswrapper[37036]: I0312 14:51:54.270601 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bc20e-default-internal-api-0"] Mar 12 14:51:54.326553 master-0 kubenswrapper[37036]: I0312 14:51:54.326497 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c","Type":"ContainerStarted","Data":"3baa4ee043c968cd9d55fda5701916d7a75ed18a32185a4e4a739799948f21c3"} Mar 12 14:51:54.332797 master-0 kubenswrapper[37036]: I0312 14:51:54.331939 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" 
event={"ID":"112425ab-cbf2-468c-b40c-e64aa339389c","Type":"ContainerDied","Data":"9b38ed165da3741c5e41e75968f12f18c6646c6bbde0eec7c7c57d57e79a89ae"} Mar 12 14:51:54.332797 master-0 kubenswrapper[37036]: I0312 14:51:54.331980 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b38ed165da3741c5e41e75968f12f18c6646c6bbde0eec7c7c57d57e79a89ae" Mar 12 14:51:54.332797 master-0 kubenswrapper[37036]: I0312 14:51:54.332057 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-994d-account-create-update-h6mj5" Mar 12 14:51:54.349052 master-0 kubenswrapper[37036]: I0312 14:51:54.348950 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9dzk7" Mar 12 14:51:54.349338 master-0 kubenswrapper[37036]: I0312 14:51:54.349290 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9dzk7" event={"ID":"72fa304a-a97e-4350-81f5-6180bc4ba594","Type":"ContainerDied","Data":"77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954"} Mar 12 14:51:54.349414 master-0 kubenswrapper[37036]: I0312 14:51:54.349341 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77c5c7dafde3597464c5487448b06b959627f6cd1707bb39630f0b02fe327954" Mar 12 14:51:54.351193 master-0 kubenswrapper[37036]: I0312 14:51:54.351128 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"23884bb7-c60a-40ec-b96e-7b5280cea5f5","Type":"ContainerStarted","Data":"bd03cb9cdb8ffb3d41cc5e7a190a2ccedf1ebda04aa6571ed611ca61c23ecd93"} Mar 12 14:51:54.354578 master-0 kubenswrapper[37036]: I0312 14:51:54.354529 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a000-account-create-update-t5sxm" 
event={"ID":"eb4a72f6-6d97-4d7b-a538-11604e6144ea","Type":"ContainerDied","Data":"eeaffe7d7209188c696e9a8baca9c2cde078e79aaff725db51e3a35adb785f89"} Mar 12 14:51:54.354648 master-0 kubenswrapper[37036]: I0312 14:51:54.354586 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeaffe7d7209188c696e9a8baca9c2cde078e79aaff725db51e3a35adb785f89" Mar 12 14:51:54.354648 master-0 kubenswrapper[37036]: I0312 14:51:54.354597 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fqxgc" Mar 12 14:51:54.354751 master-0 kubenswrapper[37036]: I0312 14:51:54.354560 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a000-account-create-update-t5sxm" Mar 12 14:51:55.398152 master-0 kubenswrapper[37036]: I0312 14:51:55.397948 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"23884bb7-c60a-40ec-b96e-7b5280cea5f5","Type":"ContainerStarted","Data":"032feb3808cadbd2024a36c7ae6a14da0f2ce2ebb116c19407bd7624c7520419"} Mar 12 14:51:55.401591 master-0 kubenswrapper[37036]: I0312 14:51:55.401544 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c","Type":"ContainerStarted","Data":"c1160a945d64136bdff1607a42e4e7424f4219c79e680166a2425cc4987bbf51"} Mar 12 14:51:56.135007 master-0 kubenswrapper[37036]: I0312 14:51:56.132570 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7947596457-rj5wn" Mar 12 14:51:56.372596 master-0 kubenswrapper[37036]: I0312 14:51:56.372524 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:51:56.373018 master-0 kubenswrapper[37036]: I0312 14:51:56.372826 37036 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7847764989-d9gwb" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="dnsmasq-dns" containerID="cri-o://964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d" gracePeriod=10 Mar 12 14:51:56.422541 master-0 kubenswrapper[37036]: I0312 14:51:56.422419 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-internal-api-0" event={"ID":"23884bb7-c60a-40ec-b96e-7b5280cea5f5","Type":"ContainerStarted","Data":"9a3ee235a4d4ccad50e93b695fb353e1dacb343697a46fd454ddeac20d1af3e3"} Mar 12 14:51:56.427391 master-0 kubenswrapper[37036]: I0312 14:51:56.427353 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bc20e-default-external-api-0" event={"ID":"d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c","Type":"ContainerStarted","Data":"1d3e139c50391338e9d3beb35aacb2adbf01457681a58021a29857e9578420e6"} Mar 12 14:51:56.748992 master-0 kubenswrapper[37036]: I0312 14:51:56.746598 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bc20e-default-external-api-0" podStartSLOduration=12.746576865 podStartE2EDuration="12.746576865s" podCreationTimestamp="2026-03-12 14:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:56.746385952 +0000 UTC m=+975.754126889" watchObservedRunningTime="2026-03-12 14:51:56.746576865 +0000 UTC m=+975.754317812" Mar 12 14:51:56.830569 master-0 kubenswrapper[37036]: I0312 14:51:56.830462 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bc20e-default-internal-api-0" podStartSLOduration=12.830438167 podStartE2EDuration="12.830438167s" podCreationTimestamp="2026-03-12 14:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:51:56.794724935 +0000 UTC m=+975.802465872" 
watchObservedRunningTime="2026-03-12 14:51:56.830438167 +0000 UTC m=+975.838179104" Mar 12 14:51:57.287826 master-0 kubenswrapper[37036]: I0312 14:51:57.287692 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjqjr"] Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: E0312 14:51:57.288287 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14afc361-6f21-415e-af7b-7ed3a4f9c48b" containerName="mariadb-account-create-update" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: I0312 14:51:57.288307 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="14afc361-6f21-415e-af7b-7ed3a4f9c48b" containerName="mariadb-account-create-update" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: E0312 14:51:57.288334 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" containerName="mariadb-database-create" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: I0312 14:51:57.288340 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" containerName="mariadb-database-create" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: E0312 14:51:57.288355 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-log" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: I0312 14:51:57.288361 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-log" Mar 12 14:51:57.288373 master-0 kubenswrapper[37036]: E0312 14:51:57.288373 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92f3efc-76bc-40a0-b3c3-8da77d03c022" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288381 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92f3efc-76bc-40a0-b3c3-8da77d03c022" 
containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: E0312 14:51:57.288395 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4a72f6-6d97-4d7b-a538-11604e6144ea" containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288400 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4a72f6-6d97-4d7b-a538-11604e6144ea" containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: E0312 14:51:57.288417 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112425ab-cbf2-468c-b40c-e64aa339389c" containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288423 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="112425ab-cbf2-468c-b40c-e64aa339389c" containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: E0312 14:51:57.288434 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72fa304a-a97e-4350-81f5-6180bc4ba594" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288441 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="72fa304a-a97e-4350-81f5-6180bc4ba594" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: E0312 14:51:57.288458 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-api" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288464 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-api" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288670 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="112425ab-cbf2-468c-b40c-e64aa339389c" 
containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288705 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-log" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288720 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="72fa304a-a97e-4350-81f5-6180bc4ba594" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288727 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92f3efc-76bc-40a0-b3c3-8da77d03c022" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288753 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4a72f6-6d97-4d7b-a538-11604e6144ea" containerName="mariadb-account-create-update" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288775 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="e762e1b3-ab0c-47b4-88a1-4e4030b12ed4" containerName="mariadb-database-create" Mar 12 14:51:57.288799 master-0 kubenswrapper[37036]: I0312 14:51:57.288809 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f8830e-f634-424d-b7b7-606453255117" containerName="placement-api" Mar 12 14:51:57.289577 master-0 kubenswrapper[37036]: I0312 14:51:57.288824 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="14afc361-6f21-415e-af7b-7ed3a4f9c48b" containerName="mariadb-account-create-update" Mar 12 14:51:57.289630 master-0 kubenswrapper[37036]: I0312 14:51:57.289601 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.294732 master-0 kubenswrapper[37036]: I0312 14:51:57.294675 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 12 14:51:57.295016 master-0 kubenswrapper[37036]: I0312 14:51:57.294743 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 14:51:57.338066 master-0 kubenswrapper[37036]: I0312 14:51:57.337973 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjqjr"] Mar 12 14:51:57.397946 master-0 kubenswrapper[37036]: I0312 14:51:57.397130 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7847764989-d9gwb" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.238:5353: connect: connection refused" Mar 12 14:51:57.406058 master-0 kubenswrapper[37036]: I0312 14:51:57.400698 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.406058 master-0 kubenswrapper[37036]: I0312 14:51:57.402413 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.406058 master-0 kubenswrapper[37036]: I0312 14:51:57.402718 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.406058 master-0 kubenswrapper[37036]: I0312 14:51:57.403106 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lr8f\" (UniqueName: \"kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.505364 master-0 kubenswrapper[37036]: I0312 14:51:57.505300 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.505930 master-0 kubenswrapper[37036]: I0312 14:51:57.505398 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lr8f\" (UniqueName: \"kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.505930 master-0 kubenswrapper[37036]: I0312 14:51:57.505563 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.505930 master-0 
kubenswrapper[37036]: I0312 14:51:57.505602 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.517941 master-0 kubenswrapper[37036]: I0312 14:51:57.510528 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.517941 master-0 kubenswrapper[37036]: I0312 14:51:57.510599 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.517941 master-0 kubenswrapper[37036]: I0312 14:51:57.511722 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:51:57.531070 master-0 kubenswrapper[37036]: I0312 14:51:57.529568 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lr8f\" (UniqueName: \"kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f\") pod \"nova-cell0-conductor-db-sync-zjqjr\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") " pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 
14:51:57.645092 master-0 kubenswrapper[37036]: I0312 14:51:57.644379 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" Mar 12 14:52:00.105066 master-0 kubenswrapper[37036]: I0312 14:52:00.105015 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:52:00.181461 master-0 kubenswrapper[37036]: I0312 14:52:00.181340 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.181681 master-0 kubenswrapper[37036]: I0312 14:52:00.181650 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.181758 master-0 kubenswrapper[37036]: I0312 14:52:00.181736 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjpqk\" (UniqueName: \"kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.182215 master-0 kubenswrapper[37036]: I0312 14:52:00.181847 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.182215 master-0 kubenswrapper[37036]: I0312 14:52:00.181935 37036 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.182215 master-0 kubenswrapper[37036]: I0312 14:52:00.181988 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc\") pod \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\" (UID: \"3ce6481f-851c-4ead-a7c8-5de1d781cef9\") " Mar 12 14:52:00.262204 master-0 kubenswrapper[37036]: W0312 14:52:00.262129 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61a413f4_9b2a_4a44_aef7_6c75090b9a44.slice/crio-54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a WatchSource:0}: Error finding container 54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a: Status 404 returned error can't find the container with id 54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a Mar 12 14:52:00.267294 master-0 kubenswrapper[37036]: I0312 14:52:00.267258 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk" (OuterVolumeSpecName: "kube-api-access-cjpqk") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "kube-api-access-cjpqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:00.281315 master-0 kubenswrapper[37036]: I0312 14:52:00.281258 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zjqjr"] Mar 12 14:52:00.300365 master-0 kubenswrapper[37036]: I0312 14:52:00.299884 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjpqk\" (UniqueName: \"kubernetes.io/projected/3ce6481f-851c-4ead-a7c8-5de1d781cef9-kube-api-access-cjpqk\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.326114 master-0 kubenswrapper[37036]: I0312 14:52:00.326036 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:00.339310 master-0 kubenswrapper[37036]: I0312 14:52:00.334946 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:00.339310 master-0 kubenswrapper[37036]: I0312 14:52:00.336148 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:00.370545 master-0 kubenswrapper[37036]: I0312 14:52:00.370476 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:00.385330 master-0 kubenswrapper[37036]: I0312 14:52:00.385246 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config" (OuterVolumeSpecName: "config") pod "3ce6481f-851c-4ead-a7c8-5de1d781cef9" (UID: "3ce6481f-851c-4ead-a7c8-5de1d781cef9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:52:00.403193 master-0 kubenswrapper[37036]: I0312 14:52:00.403145 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.403193 master-0 kubenswrapper[37036]: I0312 14:52:00.403189 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.403635 master-0 kubenswrapper[37036]: I0312 14:52:00.403202 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.403635 master-0 kubenswrapper[37036]: I0312 14:52:00.403211 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.403635 master-0 kubenswrapper[37036]: I0312 14:52:00.403222 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ce6481f-851c-4ead-a7c8-5de1d781cef9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:00.509662 master-0 kubenswrapper[37036]: I0312 14:52:00.507427 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" event={"ID":"61a413f4-9b2a-4a44-aef7-6c75090b9a44","Type":"ContainerStarted","Data":"54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a"} Mar 12 14:52:00.512363 master-0 kubenswrapper[37036]: I0312 14:52:00.512301 37036 generic.go:334] "Generic (PLEG): container finished" podID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerID="964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d" exitCode=0 Mar 12 14:52:00.512501 master-0 kubenswrapper[37036]: I0312 14:52:00.512364 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847764989-d9gwb" event={"ID":"3ce6481f-851c-4ead-a7c8-5de1d781cef9","Type":"ContainerDied","Data":"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d"} Mar 12 14:52:00.512501 master-0 kubenswrapper[37036]: I0312 14:52:00.512397 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7847764989-d9gwb" event={"ID":"3ce6481f-851c-4ead-a7c8-5de1d781cef9","Type":"ContainerDied","Data":"df8917773066c6768a329bbd755868e50bcd7f9bcd206c2729fda8c145e9f1b6"} Mar 12 14:52:00.512501 master-0 kubenswrapper[37036]: I0312 14:52:00.512435 37036 scope.go:117] "RemoveContainer" containerID="964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d" Mar 12 14:52:00.512669 master-0 kubenswrapper[37036]: I0312 14:52:00.512647 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7847764989-d9gwb" Mar 12 14:52:00.566044 master-0 kubenswrapper[37036]: I0312 14:52:00.566010 37036 scope.go:117] "RemoveContainer" containerID="29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247" Mar 12 14:52:00.581418 master-0 kubenswrapper[37036]: I0312 14:52:00.581342 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:52:00.591277 master-0 kubenswrapper[37036]: I0312 14:52:00.591235 37036 scope.go:117] "RemoveContainer" containerID="964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d" Mar 12 14:52:00.592470 master-0 kubenswrapper[37036]: E0312 14:52:00.592416 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d\": container with ID starting with 964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d not found: ID does not exist" containerID="964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d" Mar 12 14:52:00.592551 master-0 kubenswrapper[37036]: I0312 14:52:00.592480 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d"} err="failed to get container status \"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d\": rpc error: code = NotFound desc = could not find container \"964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d\": container with ID starting with 964973b397b2284ee8057cda7c0bad6efd8b6105a2d8773ee1796bca5ac9d87d not found: ID does not exist" Mar 12 14:52:00.592551 master-0 kubenswrapper[37036]: I0312 14:52:00.592515 37036 scope.go:117] "RemoveContainer" containerID="29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247" Mar 12 14:52:00.593123 master-0 kubenswrapper[37036]: E0312 14:52:00.593092 37036 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247\": container with ID starting with 29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247 not found: ID does not exist" containerID="29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247" Mar 12 14:52:00.593194 master-0 kubenswrapper[37036]: I0312 14:52:00.593124 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247"} err="failed to get container status \"29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247\": rpc error: code = NotFound desc = could not find container \"29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247\": container with ID starting with 29127fdb6c441e4a272691a75084a293302a2094a49f7af6db4342c807953247 not found: ID does not exist" Mar 12 14:52:00.603742 master-0 kubenswrapper[37036]: I0312 14:52:00.603495 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7847764989-d9gwb"] Mar 12 14:52:01.261428 master-0 kubenswrapper[37036]: I0312 14:52:01.261247 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" path="/var/lib/kubelet/pods/3ce6481f-851c-4ead-a7c8-5de1d781cef9/volumes" Mar 12 14:52:01.537265 master-0 kubenswrapper[37036]: I0312 14:52:01.537078 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"5b9d6b8d3e41d2665fcb1a46393e0bcde5da7071984972a5a11923715f59c0b5"} Mar 12 14:52:01.543725 master-0 kubenswrapper[37036]: I0312 14:52:01.543673 37036 generic.go:334] "Generic (PLEG): container finished" podID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" 
containerID="7922c5548f626b2370928e6581214b1a0542be5cf2e37fbe83b4e983c0b1d7c9" exitCode=0 Mar 12 14:52:01.543990 master-0 kubenswrapper[37036]: I0312 14:52:01.543728 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d","Type":"ContainerDied","Data":"7922c5548f626b2370928e6581214b1a0542be5cf2e37fbe83b4e983c0b1d7c9"} Mar 12 14:52:02.583644 master-0 kubenswrapper[37036]: I0312 14:52:02.583538 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:52:02.586479 master-0 kubenswrapper[37036]: I0312 14:52:02.586439 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:52:02.626300 master-0 kubenswrapper[37036]: I0312 14:52:02.626243 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:52:02.633762 master-0 kubenswrapper[37036]: I0312 14:52:02.633702 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-external-api-0" Mar 12 14:52:02.775262 master-0 kubenswrapper[37036]: I0312 14:52:02.773658 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 14:52:02.872018 master-0 kubenswrapper[37036]: I0312 14:52:02.871953 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872239 master-0 kubenswrapper[37036]: I0312 14:52:02.872042 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872239 master-0 kubenswrapper[37036]: I0312 14:52:02.872177 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872239 master-0 kubenswrapper[37036]: I0312 14:52:02.872227 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872356 master-0 kubenswrapper[37036]: I0312 14:52:02.872256 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872400 master-0 kubenswrapper[37036]: I0312 
14:52:02.872362 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.872435 master-0 kubenswrapper[37036]: I0312 14:52:02.872424 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xknc\" (UniqueName: \"kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc\") pod \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\" (UID: \"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d\") " Mar 12 14:52:02.873111 master-0 kubenswrapper[37036]: I0312 14:52:02.873065 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:02.877749 master-0 kubenswrapper[37036]: I0312 14:52:02.877349 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:52:02.878748 master-0 kubenswrapper[37036]: I0312 14:52:02.878698 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 14:52:02.880644 master-0 kubenswrapper[37036]: I0312 14:52:02.880548 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts" (OuterVolumeSpecName: "scripts") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:02.881560 master-0 kubenswrapper[37036]: I0312 14:52:02.881511 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config" (OuterVolumeSpecName: "config") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:02.886176 master-0 kubenswrapper[37036]: I0312 14:52:02.886133 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc" (OuterVolumeSpecName: "kube-api-access-5xknc") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "kube-api-access-5xknc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:02.936972 master-0 kubenswrapper[37036]: I0312 14:52:02.935246 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" (UID: "7d3a5f1e-e962-49a7-8cbc-586918ab5a4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975817 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975862 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xknc\" (UniqueName: \"kubernetes.io/projected/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-kube-api-access-5xknc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975875 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975887 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975914 37036 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 12 
14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975923 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:02.976173 master-0 kubenswrapper[37036]: I0312 14:52:02.975932 37036 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:03.551512 master-0 kubenswrapper[37036]: I0312 14:52:03.551179 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:03.551512 master-0 kubenswrapper[37036]: I0312 14:52:03.551247 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:03.581074 master-0 kubenswrapper[37036]: I0312 14:52:03.579736 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 12 14:52:03.581074 master-0 kubenswrapper[37036]: I0312 14:52:03.579798 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7d3a5f1e-e962-49a7-8cbc-586918ab5a4d","Type":"ContainerDied","Data":"1771a1c394d41b9d32f767840e48e897e72bfdccd5a7cd252c8929dde0f13698"}
Mar 12 14:52:03.581074 master-0 kubenswrapper[37036]: I0312 14:52:03.579836 37036 scope.go:117] "RemoveContainer" containerID="7922c5548f626b2370928e6581214b1a0542be5cf2e37fbe83b4e983c0b1d7c9"
Mar 12 14:52:03.581074 master-0 kubenswrapper[37036]: I0312 14:52:03.580033 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:52:03.581074 master-0 kubenswrapper[37036]: I0312 14:52:03.580057 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:52:03.591650 master-0 kubenswrapper[37036]: I0312 14:52:03.591442 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:03.591990 master-0 kubenswrapper[37036]: I0312 14:52:03.591944 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:03.600395 master-0 kubenswrapper[37036]: I0312 14:52:03.598276 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:03.612214 master-0 kubenswrapper[37036]: I0312 14:52:03.612176 37036 scope.go:117] "RemoveContainer" containerID="a1329fe3eb21e8c693a4f679ba5724bab1d7e069b873f7c81d0bf3fbdc035e10"
Mar 12 14:52:03.839050 master-0 kubenswrapper[37036]: I0312 14:52:03.837683 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 14:52:03.851943 master-0 kubenswrapper[37036]: I0312 14:52:03.848158 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 14:52:03.869811 master-0 kubenswrapper[37036]: I0312 14:52:03.869763 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: E0312 14:52:03.870616 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="dnsmasq-dns"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.870636 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="dnsmasq-dns"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: E0312 14:52:03.870647 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" containerName="inspector-pxe-init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.870654 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" containerName="inspector-pxe-init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: E0312 14:52:03.870697 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" containerName="ironic-python-agent-init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.870705 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" containerName="ironic-python-agent-init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: E0312 14:52:03.870739 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.870746 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.871150 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" containerName="inspector-pxe-init"
Mar 12 14:52:03.874108 master-0 kubenswrapper[37036]: I0312 14:52:03.871172 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce6481f-851c-4ead-a7c8-5de1d781cef9" containerName="dnsmasq-dns"
Mar 12 14:52:03.904046 master-0 kubenswrapper[37036]: I0312 14:52:03.900963 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 12 14:52:03.912615 master-0 kubenswrapper[37036]: I0312 14:52:03.905549 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 12 14:52:03.912615 master-0 kubenswrapper[37036]: I0312 14:52:03.905584 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc"
Mar 12 14:52:03.912615 master-0 kubenswrapper[37036]: I0312 14:52:03.905756 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc"
Mar 12 14:52:03.912615 master-0 kubenswrapper[37036]: I0312 14:52:03.905849 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 12 14:52:03.912615 master-0 kubenswrapper[37036]: I0312 14:52:03.906030 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 12 14:52:03.952923 master-0 kubenswrapper[37036]: I0312 14:52:03.945424 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 14:52:04.034591 master-0 kubenswrapper[37036]: I0312 14:52:04.034515 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-config\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.034835 master-0 kubenswrapper[37036]: I0312 14:52:04.034621 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-scripts\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.034835 master-0 kubenswrapper[37036]: I0312 14:52:04.034676 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.034835 master-0 kubenswrapper[37036]: I0312 14:52:04.034704 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.034835 master-0 kubenswrapper[37036]: I0312 14:52:04.034746 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.034835 master-0 kubenswrapper[37036]: I0312 14:52:04.034795 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.035099 master-0 kubenswrapper[37036]: I0312 14:52:04.034869 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.035099 master-0 kubenswrapper[37036]: I0312 14:52:04.035035 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgfr\" (UniqueName: \"kubernetes.io/projected/8deaf53d-c497-4a42-92f7-1d88df637fec-kube-api-access-rkgfr\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.039928 master-0 kubenswrapper[37036]: I0312 14:52:04.035279 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8deaf53d-c497-4a42-92f7-1d88df637fec-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138428 master-0 kubenswrapper[37036]: I0312 14:52:04.138304 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-config\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138428 master-0 kubenswrapper[37036]: I0312 14:52:04.138397 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-scripts\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138660 master-0 kubenswrapper[37036]: I0312 14:52:04.138443 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138660 master-0 kubenswrapper[37036]: I0312 14:52:04.138477 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138660 master-0 kubenswrapper[37036]: I0312 14:52:04.138518 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138660 master-0 kubenswrapper[37036]: I0312 14:52:04.138562 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138660 master-0 kubenswrapper[37036]: I0312 14:52:04.138632 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138824 master-0 kubenswrapper[37036]: I0312 14:52:04.138760 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgfr\" (UniqueName: \"kubernetes.io/projected/8deaf53d-c497-4a42-92f7-1d88df637fec-kube-api-access-rkgfr\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.138824 master-0 kubenswrapper[37036]: I0312 14:52:04.138794 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8deaf53d-c497-4a42-92f7-1d88df637fec-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.139823 master-0 kubenswrapper[37036]: I0312 14:52:04.139746 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.140941 master-0 kubenswrapper[37036]: I0312 14:52:04.140782 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/8deaf53d-c497-4a42-92f7-1d88df637fec-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.144983 master-0 kubenswrapper[37036]: I0312 14:52:04.144867 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.152788 master-0 kubenswrapper[37036]: I0312 14:52:04.151865 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8deaf53d-c497-4a42-92f7-1d88df637fec-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.153361 master-0 kubenswrapper[37036]: I0312 14:52:04.153284 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.155948 master-0 kubenswrapper[37036]: I0312 14:52:04.155680 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-config\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.157511 master-0 kubenswrapper[37036]: I0312 14:52:04.157476 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-scripts\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.161867 master-0 kubenswrapper[37036]: I0312 14:52:04.160211 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgfr\" (UniqueName: \"kubernetes.io/projected/8deaf53d-c497-4a42-92f7-1d88df637fec-kube-api-access-rkgfr\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.163515 master-0 kubenswrapper[37036]: I0312 14:52:04.163460 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8deaf53d-c497-4a42-92f7-1d88df637fec-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"8deaf53d-c497-4a42-92f7-1d88df637fec\") " pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.350288 master-0 kubenswrapper[37036]: I0312 14:52:04.350214 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 12 14:52:04.601204 master-0 kubenswrapper[37036]: I0312 14:52:04.601131 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:05.260941 master-0 kubenswrapper[37036]: I0312 14:52:05.260433 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3a5f1e-e962-49a7-8cbc-586918ab5a4d" path="/var/lib/kubelet/pods/7d3a5f1e-e962-49a7-8cbc-586918ab5a4d/volumes"
Mar 12 14:52:05.612659 master-0 kubenswrapper[37036]: I0312 14:52:05.612536 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 14:52:08.470927 master-0 kubenswrapper[37036]: I0312 14:52:08.470724 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:08.470927 master-0 kubenswrapper[37036]: I0312 14:52:08.470876 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 14:52:08.513520 master-0 kubenswrapper[37036]: I0312 14:52:08.513455 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:52:08.513728 master-0 kubenswrapper[37036]: I0312 14:52:08.513605 37036 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 14:52:08.515074 master-0 kubenswrapper[37036]: I0312 14:52:08.514958 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-external-api-0"
Mar 12 14:52:08.543803 master-0 kubenswrapper[37036]: I0312 14:52:08.543746 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bc20e-default-internal-api-0"
Mar 12 14:52:12.554396 master-0 kubenswrapper[37036]: I0312 14:52:12.554322 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 14:52:12.715455 master-0 kubenswrapper[37036]: I0312 14:52:12.715382 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"60ede5ff387cb3273e547adacad86a56eedc73c1cbeef6034f077900967d1558"}
Mar 12 14:52:12.720022 master-0 kubenswrapper[37036]: I0312 14:52:12.719949 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" event={"ID":"61a413f4-9b2a-4a44-aef7-6c75090b9a44","Type":"ContainerStarted","Data":"7232bc1d911005cdb464491505f27d9f035b65dd4c78d5a553e34bb2fe4e6447"}
Mar 12 14:52:12.754210 master-0 kubenswrapper[37036]: I0312 14:52:12.753866 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" podStartSLOduration=4.007076467 podStartE2EDuration="15.753849492s" podCreationTimestamp="2026-03-12 14:51:57 +0000 UTC" firstStartedPulling="2026-03-12 14:52:00.266350364 +0000 UTC m=+979.274091301" lastFinishedPulling="2026-03-12 14:52:12.013123399 +0000 UTC m=+991.020864326" observedRunningTime="2026-03-12 14:52:12.741959004 +0000 UTC m=+991.749699941" watchObservedRunningTime="2026-03-12 14:52:12.753849492 +0000 UTC m=+991.761590429"
Mar 12 14:52:13.752053 master-0 kubenswrapper[37036]: I0312 14:52:13.751993 37036 generic.go:334] "Generic (PLEG): container finished" podID="8deaf53d-c497-4a42-92f7-1d88df637fec" containerID="1d2e00ea088cc43a83b9c426b5d7a3426401d75ddf29b4f89fcb6844f341cb95" exitCode=0
Mar 12 14:52:13.752861 master-0 kubenswrapper[37036]: I0312 14:52:13.752115 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerDied","Data":"1d2e00ea088cc43a83b9c426b5d7a3426401d75ddf29b4f89fcb6844f341cb95"}
Mar 12 14:52:14.764234 master-0 kubenswrapper[37036]: I0312 14:52:14.764099 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"baa5f5f15a7ed81bdb55496751792be727ea4babb121719e7dc41883d2f57b3a"}
Mar 12 14:52:15.776088 master-0 kubenswrapper[37036]: I0312 14:52:15.776017 37036 generic.go:334] "Generic (PLEG): container finished" podID="8deaf53d-c497-4a42-92f7-1d88df637fec" containerID="baa5f5f15a7ed81bdb55496751792be727ea4babb121719e7dc41883d2f57b3a" exitCode=0
Mar 12 14:52:15.776088 master-0 kubenswrapper[37036]: I0312 14:52:15.776066 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerDied","Data":"baa5f5f15a7ed81bdb55496751792be727ea4babb121719e7dc41883d2f57b3a"}
Mar 12 14:52:16.825394 master-0 kubenswrapper[37036]: I0312 14:52:16.825276 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"40041e4b6a66620bd6c65aedb6333507f1aa0e5a1c8d25e989c67a11dc52d8bf"}
Mar 12 14:52:17.846340 master-0 kubenswrapper[37036]: I0312 14:52:17.846205 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"37e3438d4f3eefe26a8f918be849af3d990145378d6202149bf14efae3c7f14f"}
Mar 12 14:52:17.846340 master-0 kubenswrapper[37036]: I0312 14:52:17.846257 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"a69a116cb44cb6b394ef488e1348cbf18e7e0a36b985ba0ab1f8ed9693d293c9"}
Mar 12 14:52:18.864119 master-0 kubenswrapper[37036]: I0312 14:52:18.863988 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"c8c22da031d534ac640b754a764a93e23fe7e22cab5c5cea7fc8de70efa89ec5"}
Mar 12 14:52:18.864119 master-0 kubenswrapper[37036]: I0312 14:52:18.864044 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"8deaf53d-c497-4a42-92f7-1d88df637fec","Type":"ContainerStarted","Data":"1cbb0b2b15fdf5c880f7fa9ac8733a98d923f9148e32a51b55aec35208697480"}
Mar 12 14:52:18.864774 master-0 kubenswrapper[37036]: I0312 14:52:18.864177 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 14:52:18.864774 master-0 kubenswrapper[37036]: I0312 14:52:18.864235 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 14:52:18.905579 master-0 kubenswrapper[37036]: I0312 14:52:18.905473 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=15.905452119 podStartE2EDuration="15.905452119s" podCreationTimestamp="2026-03-12 14:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:18.893398299 +0000 UTC m=+997.901139256" watchObservedRunningTime="2026-03-12 14:52:18.905452119 +0000 UTC m=+997.913193056"
Mar 12 14:52:19.350462 master-0 kubenswrapper[37036]: I0312 14:52:19.350399 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 14:52:19.350462 master-0 kubenswrapper[37036]: I0312 14:52:19.350458 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.350436 master-0 kubenswrapper[37036]: I0312 14:52:24.350348 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.350436 master-0 kubenswrapper[37036]: I0312 14:52:24.350430 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.359371 master-0 kubenswrapper[37036]: I0312 14:52:24.355106 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.399704 master-0 kubenswrapper[37036]: I0312 14:52:24.399645 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.401777 master-0 kubenswrapper[37036]: I0312 14:52:24.401747 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.418309 master-0 kubenswrapper[37036]: I0312 14:52:24.418086 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.932344 master-0 kubenswrapper[37036]: I0312 14:52:24.932271 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 14:52:24.936780 master-0 kubenswrapper[37036]: I0312 14:52:24.935103 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 14:52:29.979876 master-0 kubenswrapper[37036]: I0312 14:52:29.979741 37036 generic.go:334] "Generic (PLEG): container finished" podID="61a413f4-9b2a-4a44-aef7-6c75090b9a44" containerID="7232bc1d911005cdb464491505f27d9f035b65dd4c78d5a553e34bb2fe4e6447" exitCode=0
Mar 12 14:52:29.979876 master-0 kubenswrapper[37036]: I0312 14:52:29.979793 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" event={"ID":"61a413f4-9b2a-4a44-aef7-6c75090b9a44","Type":"ContainerDied","Data":"7232bc1d911005cdb464491505f27d9f035b65dd4c78d5a553e34bb2fe4e6447"}
Mar 12 14:52:31.468541 master-0 kubenswrapper[37036]: I0312 14:52:31.468036 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjqjr"
Mar 12 14:52:31.585561 master-0 kubenswrapper[37036]: I0312 14:52:31.584950 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts\") pod \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") "
Mar 12 14:52:31.585561 master-0 kubenswrapper[37036]: I0312 14:52:31.585073 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle\") pod \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") "
Mar 12 14:52:31.586614 master-0 kubenswrapper[37036]: I0312 14:52:31.585938 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data\") pod \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") "
Mar 12 14:52:31.586614 master-0 kubenswrapper[37036]: I0312 14:52:31.586059 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lr8f\" (UniqueName: \"kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f\") pod \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\" (UID: \"61a413f4-9b2a-4a44-aef7-6c75090b9a44\") "
Mar 12 14:52:31.590456 master-0 kubenswrapper[37036]: I0312 14:52:31.588313 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts" (OuterVolumeSpecName: "scripts") pod "61a413f4-9b2a-4a44-aef7-6c75090b9a44" (UID: "61a413f4-9b2a-4a44-aef7-6c75090b9a44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:52:31.590456 master-0 kubenswrapper[37036]: I0312 14:52:31.589998 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f" (OuterVolumeSpecName: "kube-api-access-8lr8f") pod "61a413f4-9b2a-4a44-aef7-6c75090b9a44" (UID: "61a413f4-9b2a-4a44-aef7-6c75090b9a44"). InnerVolumeSpecName "kube-api-access-8lr8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:52:31.613944 master-0 kubenswrapper[37036]: I0312 14:52:31.613278 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data" (OuterVolumeSpecName: "config-data") pod "61a413f4-9b2a-4a44-aef7-6c75090b9a44" (UID: "61a413f4-9b2a-4a44-aef7-6c75090b9a44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:52:31.614122 master-0 kubenswrapper[37036]: I0312 14:52:31.614014 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61a413f4-9b2a-4a44-aef7-6c75090b9a44" (UID: "61a413f4-9b2a-4a44-aef7-6c75090b9a44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:52:31.694818 master-0 kubenswrapper[37036]: I0312 14:52:31.694746 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lr8f\" (UniqueName: \"kubernetes.io/projected/61a413f4-9b2a-4a44-aef7-6c75090b9a44-kube-api-access-8lr8f\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:31.694818 master-0 kubenswrapper[37036]: I0312 14:52:31.694801 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:31.694818 master-0 kubenswrapper[37036]: I0312 14:52:31.694814 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:31.694818 master-0 kubenswrapper[37036]: I0312 14:52:31.694823 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a413f4-9b2a-4a44-aef7-6c75090b9a44-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:32.007189 master-0 kubenswrapper[37036]: I0312 14:52:32.007134 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zjqjr" event={"ID":"61a413f4-9b2a-4a44-aef7-6c75090b9a44","Type":"ContainerDied","Data":"54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a"}
Mar 12 14:52:32.007189 master-0 kubenswrapper[37036]: I0312 14:52:32.007178 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d4e9d22c2de27a539739db2d66d90562ef1d884c4cdf600f1711320ad4a36a"
Mar 12 14:52:32.007464 master-0 kubenswrapper[37036]: I0312 14:52:32.007228 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zjqjr"
Mar 12 14:52:32.180206 master-0 kubenswrapper[37036]: I0312 14:52:32.180147 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 14:52:32.181152 master-0 kubenswrapper[37036]: E0312 14:52:32.181130 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a413f4-9b2a-4a44-aef7-6c75090b9a44" containerName="nova-cell0-conductor-db-sync"
Mar 12 14:52:32.181260 master-0 kubenswrapper[37036]: I0312 14:52:32.181248 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a413f4-9b2a-4a44-aef7-6c75090b9a44" containerName="nova-cell0-conductor-db-sync"
Mar 12 14:52:32.181608 master-0 kubenswrapper[37036]: I0312 14:52:32.181594 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a413f4-9b2a-4a44-aef7-6c75090b9a44" containerName="nova-cell0-conductor-db-sync"
Mar 12 14:52:32.182464 master-0 kubenswrapper[37036]: I0312 14:52:32.182447 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.185889 master-0 kubenswrapper[37036]: I0312 14:52:32.185868 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 12 14:52:32.197479 master-0 kubenswrapper[37036]: I0312 14:52:32.197412 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 14:52:32.326941 master-0 kubenswrapper[37036]: I0312 14:52:32.326787 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.327130 master-0 kubenswrapper[37036]: I0312 14:52:32.327028 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsjd\" (UniqueName: \"kubernetes.io/projected/b210b938-2578-45b2-a6ef-84908f58242a-kube-api-access-slsjd\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.327130 master-0 kubenswrapper[37036]: I0312 14:52:32.327115 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.430377 master-0 kubenswrapper[37036]: I0312 14:52:32.430287 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.430613 master-0 kubenswrapper[37036]: I0312 14:52:32.430428 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsjd\" (UniqueName: \"kubernetes.io/projected/b210b938-2578-45b2-a6ef-84908f58242a-kube-api-access-slsjd\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.430613 master-0 kubenswrapper[37036]: I0312 14:52:32.430472 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.434684 master-0 kubenswrapper[37036]: I0312 14:52:32.434641 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.435086 master-0 kubenswrapper[37036]: I0312 14:52:32.435041 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b210b938-2578-45b2-a6ef-84908f58242a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.446075 master-0 kubenswrapper[37036]: I0312 14:52:32.446017 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsjd\" (UniqueName: \"kubernetes.io/projected/b210b938-2578-45b2-a6ef-84908f58242a-kube-api-access-slsjd\") pod \"nova-cell0-conductor-0\" (UID: \"b210b938-2578-45b2-a6ef-84908f58242a\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:32.543741 master-0 kubenswrapper[37036]: I0312 14:52:32.543663 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:33.028162 master-0 kubenswrapper[37036]: I0312 14:52:33.027424 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 14:52:33.034625 master-0 kubenswrapper[37036]: W0312 14:52:33.034580 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb210b938_2578_45b2_a6ef_84908f58242a.slice/crio-18bab4c0e01da4cdd774685d44799660653df0325e5e6aa215463dba96aacd2b WatchSource:0}: Error finding container 18bab4c0e01da4cdd774685d44799660653df0325e5e6aa215463dba96aacd2b: Status 404 returned error can't find the container with id 18bab4c0e01da4cdd774685d44799660653df0325e5e6aa215463dba96aacd2b
Mar 12 14:52:34.030927 master-0 kubenswrapper[37036]: I0312 14:52:34.030839 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b210b938-2578-45b2-a6ef-84908f58242a","Type":"ContainerStarted","Data":"fd85fb795f76c244f98465b3cc51942b32534873f6bd5a215d0be80c056e2811"}
Mar 12 14:52:34.030927 master-0 kubenswrapper[37036]: I0312 14:52:34.030923 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b210b938-2578-45b2-a6ef-84908f58242a","Type":"ContainerStarted","Data":"18bab4c0e01da4cdd774685d44799660653df0325e5e6aa215463dba96aacd2b"}
Mar 12 14:52:34.031636 master-0 kubenswrapper[37036]: I0312 14:52:34.031143 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 12 14:52:34.056841 master-0 kubenswrapper[37036]: I0312 14:52:34.056751 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0"
podStartSLOduration=2.056731198 podStartE2EDuration="2.056731198s" podCreationTimestamp="2026-03-12 14:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:34.050262429 +0000 UTC m=+1013.058003366" watchObservedRunningTime="2026-03-12 14:52:34.056731198 +0000 UTC m=+1013.064472145" Mar 12 14:52:42.574757 master-0 kubenswrapper[37036]: I0312 14:52:42.574704 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 14:52:43.223779 master-0 kubenswrapper[37036]: I0312 14:52:43.223727 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hvjms"] Mar 12 14:52:43.240277 master-0 kubenswrapper[37036]: I0312 14:52:43.232783 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.243172 master-0 kubenswrapper[37036]: I0312 14:52:43.243125 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 12 14:52:43.243464 master-0 kubenswrapper[37036]: I0312 14:52:43.243442 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 12 14:52:43.277696 master-0 kubenswrapper[37036]: I0312 14:52:43.275485 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvjms"] Mar 12 14:52:43.358966 master-0 kubenswrapper[37036]: I0312 14:52:43.357262 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.358966 master-0 kubenswrapper[37036]: I0312 
14:52:43.357344 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.358966 master-0 kubenswrapper[37036]: I0312 14:52:43.357456 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.358966 master-0 kubenswrapper[37036]: I0312 14:52:43.357621 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87s7j\" (UniqueName: \"kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.385924 master-0 kubenswrapper[37036]: I0312 14:52:43.377285 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 14:52:43.385924 master-0 kubenswrapper[37036]: I0312 14:52:43.379222 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.401537 master-0 kubenswrapper[37036]: I0312 14:52:43.398311 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Mar 12 14:52:43.477406 master-0 kubenswrapper[37036]: I0312 14:52:43.476213 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 14:52:43.481544 master-0 kubenswrapper[37036]: I0312 14:52:43.481453 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.481544 master-0 kubenswrapper[37036]: I0312 14:52:43.481553 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.481975 master-0 kubenswrapper[37036]: I0312 14:52:43.481607 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.481975 master-0 kubenswrapper[37036]: I0312 14:52:43.481660 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: 
\"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.481975 master-0 kubenswrapper[37036]: I0312 14:52:43.481699 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j556l\" (UniqueName: \"kubernetes.io/projected/2a440e7b-a37d-4e7e-9873-ac70bc709a60-kube-api-access-j556l\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.481975 master-0 kubenswrapper[37036]: I0312 14:52:43.481757 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.481975 master-0 kubenswrapper[37036]: I0312 14:52:43.481782 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87s7j\" (UniqueName: \"kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.493589 master-0 kubenswrapper[37036]: I0312 14:52:43.492831 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.495392 master-0 kubenswrapper[37036]: I0312 14:52:43.494229 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.503782 master-0 kubenswrapper[37036]: I0312 14:52:43.499819 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.525579 master-0 kubenswrapper[37036]: I0312 14:52:43.525514 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 14:52:43.528780 master-0 kubenswrapper[37036]: I0312 14:52:43.528724 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:52:43.568924 master-0 kubenswrapper[37036]: I0312 14:52:43.558290 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 14:52:43.603501 master-0 kubenswrapper[37036]: I0312 14:52:43.597102 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.603501 master-0 kubenswrapper[37036]: I0312 14:52:43.597210 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j556l\" (UniqueName: \"kubernetes.io/projected/2a440e7b-a37d-4e7e-9873-ac70bc709a60-kube-api-access-j556l\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.603501 master-0 
kubenswrapper[37036]: I0312 14:52:43.597339 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.631547 master-0 kubenswrapper[37036]: I0312 14:52:43.631478 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:52:43.632692 master-0 kubenswrapper[37036]: I0312 14:52:43.632387 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.633041 master-0 kubenswrapper[37036]: I0312 14:52:43.633009 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87s7j\" (UniqueName: \"kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j\") pod \"nova-cell0-cell-mapping-hvjms\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.635333 master-0 kubenswrapper[37036]: I0312 14:52:43.634761 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a440e7b-a37d-4e7e-9873-ac70bc709a60-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.655009 master-0 kubenswrapper[37036]: I0312 14:52:43.646839 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:52:43.655009 master-0 kubenswrapper[37036]: I0312 
14:52:43.648850 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:52:43.655009 master-0 kubenswrapper[37036]: I0312 14:52:43.652217 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 14:52:43.699954 master-0 kubenswrapper[37036]: I0312 14:52:43.699520 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.699954 master-0 kubenswrapper[37036]: I0312 14:52:43.699689 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.699954 master-0 kubenswrapper[37036]: I0312 14:52:43.699778 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.699954 master-0 kubenswrapper[37036]: I0312 14:52:43.699817 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2d7\" (UniqueName: \"kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.765044 master-0 kubenswrapper[37036]: I0312 14:52:43.760787 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j556l\" (UniqueName: \"kubernetes.io/projected/2a440e7b-a37d-4e7e-9873-ac70bc709a60-kube-api-access-j556l\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"2a440e7b-a37d-4e7e-9873-ac70bc709a60\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.786948 master-0 kubenswrapper[37036]: I0312 14:52:43.776706 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.803894 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.804799 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.804856 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24kzt\" (UniqueName: \"kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.804993 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" 
Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.805069 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.805090 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.806950 master-0 kubenswrapper[37036]: I0312 14:52:43.805128 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2d7\" (UniqueName: \"kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.811000 master-0 kubenswrapper[37036]: I0312 14:52:43.807802 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.811000 master-0 kubenswrapper[37036]: I0312 14:52:43.809842 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:52:43.861804 master-0 kubenswrapper[37036]: I0312 14:52:43.861517 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2d7\" (UniqueName: \"kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " 
pod="openstack/nova-api-0" Mar 12 14:52:43.886967 master-0 kubenswrapper[37036]: I0312 14:52:43.886113 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.887576 master-0 kubenswrapper[37036]: I0312 14:52:43.887555 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " pod="openstack/nova-api-0" Mar 12 14:52:43.911921 master-0 kubenswrapper[37036]: I0312 14:52:43.907005 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.911921 master-0 kubenswrapper[37036]: I0312 14:52:43.907215 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.911921 master-0 kubenswrapper[37036]: I0312 14:52:43.907284 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24kzt\" (UniqueName: \"kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.923920 master-0 kubenswrapper[37036]: I0312 14:52:43.922500 37036 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 14:52:43.931586 master-0 kubenswrapper[37036]: I0312 14:52:43.928114 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:43.931586 master-0 kubenswrapper[37036]: I0312 14:52:43.928189 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.931586 master-0 kubenswrapper[37036]: I0312 14:52:43.929524 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.931586 master-0 kubenswrapper[37036]: I0312 14:52:43.929605 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:43.932123 master-0 kubenswrapper[37036]: I0312 14:52:43.932066 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 12 14:52:43.957315 master-0 kubenswrapper[37036]: I0312 14:52:43.943859 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 14:52:43.957315 master-0 kubenswrapper[37036]: I0312 14:52:43.955422 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24kzt\" (UniqueName: \"kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt\") pod \"nova-scheduler-0\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " pod="openstack/nova-scheduler-0" Mar 12 14:52:43.965470 master-0 kubenswrapper[37036]: I0312 14:52:43.964977 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:52:43.967377 master-0 kubenswrapper[37036]: I0312 14:52:43.967357 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:52:43.988391 master-0 kubenswrapper[37036]: I0312 14:52:43.978058 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 14:52:43.988391 master-0 kubenswrapper[37036]: I0312 14:52:43.983166 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:52:44.051601 master-0 kubenswrapper[37036]: I0312 14:52:44.050980 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.051601 master-0 kubenswrapper[37036]: I0312 14:52:44.051511 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhzg\" (UniqueName: \"kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.051601 master-0 kubenswrapper[37036]: I0312 14:52:44.051561 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.109085 master-0 kubenswrapper[37036]: I0312 14:52:44.108997 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:52:44.146359 master-0 kubenswrapper[37036]: I0312 14:52:44.146294 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161361 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161578 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhzg\" (UniqueName: \"kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161638 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161758 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrbr\" (UniqueName: \"kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161911 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.163856 master-0 kubenswrapper[37036]: I0312 14:52:44.161967 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.165724 master-0 kubenswrapper[37036]: I0312 14:52:44.165686 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.169065 master-0 kubenswrapper[37036]: I0312 14:52:44.168955 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:52:44.170385 master-0 kubenswrapper[37036]: I0312 14:52:44.170350 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.175226 master-0 kubenswrapper[37036]: I0312 14:52:44.175073 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.183936 master-0 kubenswrapper[37036]: I0312 14:52:44.183853 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.191882 master-0 kubenswrapper[37036]: I0312 14:52:44.191842 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:52:44.197296 master-0 kubenswrapper[37036]: I0312 14:52:44.193774 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhzg\" (UniqueName: \"kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273326 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273419 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrbr\" (UniqueName: \"kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273489 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vnlc\" (UniqueName: \"kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " 
pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273523 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273558 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273656 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273716 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273751 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 
14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273781 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.273829 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.275988 master-0 kubenswrapper[37036]: I0312 14:52:44.274675 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.296919 master-0 kubenswrapper[37036]: I0312 14:52:44.292504 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.300678 master-0 kubenswrapper[37036]: I0312 14:52:44.300638 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.317633 master-0 kubenswrapper[37036]: I0312 14:52:44.315822 37036 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrbr\" (UniqueName: \"kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr\") pod \"nova-metadata-0\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") " pod="openstack/nova-metadata-0" Mar 12 14:52:44.367213 master-0 kubenswrapper[37036]: I0312 14:52:44.359444 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.377586 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vnlc\" (UniqueName: \"kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.377950 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.378093 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.378224 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.378296 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.385365 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.385537 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: 
\"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.386946 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.387649 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.389071 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.392072 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.393072 master-0 kubenswrapper[37036]: I0312 14:52:44.392214 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " 
pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.424809 master-0 kubenswrapper[37036]: I0312 14:52:44.417155 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vnlc\" (UniqueName: \"kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc\") pod \"dnsmasq-dns-5b98899985-qcjxc\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.519049 master-0 kubenswrapper[37036]: I0312 14:52:44.516465 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:44.722921 master-0 kubenswrapper[37036]: W0312 14:52:44.720284 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a440e7b_a37d_4e7e_9873_ac70bc709a60.slice/crio-eb7c70fd1c5f4e84cfeba251dc23912cb9f204623fd910770d52d209057538c9 WatchSource:0}: Error finding container eb7c70fd1c5f4e84cfeba251dc23912cb9f204623fd910770d52d209057538c9: Status 404 returned error can't find the container with id eb7c70fd1c5f4e84cfeba251dc23912cb9f204623fd910770d52d209057538c9 Mar 12 14:52:44.732521 master-0 kubenswrapper[37036]: I0312 14:52:44.724284 37036 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:52:44.732521 master-0 kubenswrapper[37036]: I0312 14:52:44.730383 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 14:52:44.847294 master-0 kubenswrapper[37036]: I0312 14:52:44.836754 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hvjms"] Mar 12 14:52:44.977915 master-0 kubenswrapper[37036]: I0312 14:52:44.973974 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q72km"] Mar 12 14:52:44.977915 master-0 kubenswrapper[37036]: I0312 
14:52:44.975986 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:44.997941 master-0 kubenswrapper[37036]: I0312 14:52:44.988169 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 14:52:44.997941 master-0 kubenswrapper[37036]: I0312 14:52:44.988466 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 14:52:44.997941 master-0 kubenswrapper[37036]: I0312 14:52:44.991594 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q72km"] Mar 12 14:52:45.055115 master-0 kubenswrapper[37036]: I0312 14:52:45.051800 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.055115 master-0 kubenswrapper[37036]: I0312 14:52:45.051934 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.055115 master-0 kubenswrapper[37036]: I0312 14:52:45.052089 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 
14:52:45.055115 master-0 kubenswrapper[37036]: I0312 14:52:45.052474 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4l4\" (UniqueName: \"kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.094713 master-0 kubenswrapper[37036]: I0312 14:52:45.094413 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:52:45.154846 master-0 kubenswrapper[37036]: I0312 14:52:45.154502 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4l4\" (UniqueName: \"kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.155101 master-0 kubenswrapper[37036]: I0312 14:52:45.154971 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.155101 master-0 kubenswrapper[37036]: I0312 14:52:45.155042 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.155357 master-0 kubenswrapper[37036]: I0312 14:52:45.155157 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.160354 master-0 kubenswrapper[37036]: I0312 14:52:45.159788 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.167699 master-0 kubenswrapper[37036]: I0312 14:52:45.167651 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.176835 master-0 kubenswrapper[37036]: I0312 14:52:45.176757 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.190810 master-0 kubenswrapper[37036]: I0312 14:52:45.190744 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4l4\" (UniqueName: \"kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4\") pod \"nova-cell1-conductor-db-sync-q72km\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.286157 master-0 kubenswrapper[37036]: I0312 14:52:45.281146 37036 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-cell-mapping-hvjms" event={"ID":"ccb059aa-827d-46f3-8218-8178e9eeafbd","Type":"ContainerStarted","Data":"9d9929604e826c941aa4dc2d411654444de355770aafd4dd9394826d6b33cd94"} Mar 12 14:52:45.286157 master-0 kubenswrapper[37036]: I0312 14:52:45.281214 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvjms" event={"ID":"ccb059aa-827d-46f3-8218-8178e9eeafbd","Type":"ContainerStarted","Data":"603f67e32048c755c313d35c6ff6f55173fee78eaa82d11ab62231d1e946e4e0"} Mar 12 14:52:45.296918 master-0 kubenswrapper[37036]: I0312 14:52:45.294246 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerStarted","Data":"694b033464c22e5c50fcf6c0332fcc50c9fcc18663ffc32e2815238fe4a4790d"} Mar 12 14:52:45.320521 master-0 kubenswrapper[37036]: I0312 14:52:45.320424 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hvjms" podStartSLOduration=2.320403876 podStartE2EDuration="2.320403876s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:45.319300919 +0000 UTC m=+1024.327041876" watchObservedRunningTime="2026-03-12 14:52:45.320403876 +0000 UTC m=+1024.328144813" Mar 12 14:52:45.324141 master-0 kubenswrapper[37036]: I0312 14:52:45.324051 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"2a440e7b-a37d-4e7e-9873-ac70bc709a60","Type":"ContainerStarted","Data":"eb7c70fd1c5f4e84cfeba251dc23912cb9f204623fd910770d52d209057538c9"} Mar 12 14:52:45.328145 master-0 kubenswrapper[37036]: I0312 14:52:45.327837 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:52:45.573518 master-0 kubenswrapper[37036]: I0312 14:52:45.573481 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 14:52:45.643123 master-0 kubenswrapper[37036]: I0312 14:52:45.643042 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:52:45.744721 master-0 kubenswrapper[37036]: I0312 14:52:45.744013 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:52:45.803581 master-0 kubenswrapper[37036]: I0312 14:52:45.803047 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:52:45.944477 master-0 kubenswrapper[37036]: I0312 14:52:45.943669 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q72km"] Mar 12 14:52:46.018130 master-0 kubenswrapper[37036]: W0312 14:52:46.018064 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46e4d8ed_5640_49bc_ae47_44c113072fab.slice/crio-97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46 WatchSource:0}: Error finding container 97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46: Status 404 returned error can't find the container with id 97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46 Mar 12 14:52:46.354056 master-0 kubenswrapper[37036]: I0312 14:52:46.351558 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18749a86-eed1-4fa8-b31d-98f0a3fc67fb","Type":"ContainerStarted","Data":"b166050c38b65737a8b55c168f109a49622fba7f0fc622a8add704d9e9a714ad"} Mar 12 14:52:46.366172 master-0 kubenswrapper[37036]: I0312 14:52:46.365941 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q72km" 
event={"ID":"46e4d8ed-5640-49bc-ae47-44c113072fab","Type":"ContainerStarted","Data":"21c24d49863539a1e314662967888c609742206be5e838a00017444084a9bd56"} Mar 12 14:52:46.366172 master-0 kubenswrapper[37036]: I0312 14:52:46.366008 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q72km" event={"ID":"46e4d8ed-5640-49bc-ae47-44c113072fab","Type":"ContainerStarted","Data":"97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46"} Mar 12 14:52:46.369683 master-0 kubenswrapper[37036]: I0312 14:52:46.369645 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerStarted","Data":"837222c1ff58df3c4056a59cda5ab585007e71f8bf49d4bad28a53647a85af60"} Mar 12 14:52:46.372870 master-0 kubenswrapper[37036]: I0312 14:52:46.372824 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4997802f-9a57-43da-8580-059b53e904c8","Type":"ContainerStarted","Data":"2b06fda22261c17e67a14dc28f0b16ec656c0fd7ac6990dbfbf2646fe22ab260"} Mar 12 14:52:46.378510 master-0 kubenswrapper[37036]: I0312 14:52:46.378452 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerID="320f295bd7f1413cc99a1a20a2c53b20e3bc1f764231c07886d04c601cb4521f" exitCode=0 Mar 12 14:52:46.378815 master-0 kubenswrapper[37036]: I0312 14:52:46.378771 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" event={"ID":"4f146506-a967-4da7-b1cb-57ba34e55eae","Type":"ContainerDied","Data":"320f295bd7f1413cc99a1a20a2c53b20e3bc1f764231c07886d04c601cb4521f"} Mar 12 14:52:46.378873 master-0 kubenswrapper[37036]: I0312 14:52:46.378824 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" 
event={"ID":"4f146506-a967-4da7-b1cb-57ba34e55eae","Type":"ContainerStarted","Data":"33854a1087f99e6b93e6f0643e6cb739c82e01b35dc7f499b1d2715958a6f975"} Mar 12 14:52:46.419547 master-0 kubenswrapper[37036]: I0312 14:52:46.419448 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q72km" podStartSLOduration=2.41939143 podStartE2EDuration="2.41939143s" podCreationTimestamp="2026-03-12 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:46.386641911 +0000 UTC m=+1025.394382848" watchObservedRunningTime="2026-03-12 14:52:46.41939143 +0000 UTC m=+1025.427132367" Mar 12 14:52:47.419984 master-0 kubenswrapper[37036]: I0312 14:52:47.415188 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" event={"ID":"4f146506-a967-4da7-b1cb-57ba34e55eae","Type":"ContainerStarted","Data":"e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37"} Mar 12 14:52:47.419984 master-0 kubenswrapper[37036]: I0312 14:52:47.418562 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:52:47.456921 master-0 kubenswrapper[37036]: I0312 14:52:47.456700 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" podStartSLOduration=4.456681168 podStartE2EDuration="4.456681168s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:47.440155484 +0000 UTC m=+1026.447896451" watchObservedRunningTime="2026-03-12 14:52:47.456681168 +0000 UTC m=+1026.464422095" Mar 12 14:52:48.137928 master-0 kubenswrapper[37036]: I0312 14:52:48.135976 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Mar 12 14:52:48.171929 master-0 kubenswrapper[37036]: I0312 14:52:48.171072 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 14:52:51.508752 master-0 kubenswrapper[37036]: I0312 14:52:51.503730 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerStarted","Data":"2a8d7b2f366ebe6213104dbff2a30441e2cc0e9ea1518c28b55dbe08143c8527"} Mar 12 14:52:51.508752 master-0 kubenswrapper[37036]: I0312 14:52:51.503950 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerStarted","Data":"631e6e95777e01adedcf1bf2e1756babcef4928935880353f50d3247fc2cf43a"} Mar 12 14:52:51.508752 master-0 kubenswrapper[37036]: I0312 14:52:51.508052 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18749a86-eed1-4fa8-b31d-98f0a3fc67fb","Type":"ContainerStarted","Data":"ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121"} Mar 12 14:52:51.512582 master-0 kubenswrapper[37036]: I0312 14:52:51.512540 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-log" containerID="cri-o://5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b" gracePeriod=30 Mar 12 14:52:51.512980 master-0 kubenswrapper[37036]: I0312 14:52:51.512752 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerStarted","Data":"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"} Mar 12 14:52:51.513124 master-0 kubenswrapper[37036]: I0312 14:52:51.513076 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerStarted","Data":"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"} Mar 12 14:52:51.513220 master-0 kubenswrapper[37036]: I0312 14:52:51.512792 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-metadata" containerID="cri-o://9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93" gracePeriod=30 Mar 12 14:52:51.516430 master-0 kubenswrapper[37036]: I0312 14:52:51.516381 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4997802f-9a57-43da-8580-059b53e904c8","Type":"ContainerStarted","Data":"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"} Mar 12 14:52:51.516575 master-0 kubenswrapper[37036]: I0312 14:52:51.516511 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4997802f-9a57-43da-8580-059b53e904c8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315" gracePeriod=30 Mar 12 14:52:51.624238 master-0 kubenswrapper[37036]: I0312 14:52:51.621148 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.890213816 podStartE2EDuration="8.621125102s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="2026-03-12 14:52:45.600502345 +0000 UTC m=+1024.608243282" lastFinishedPulling="2026-03-12 14:52:50.331413631 +0000 UTC m=+1029.339154568" observedRunningTime="2026-03-12 14:52:51.595498906 +0000 UTC m=+1030.603239853" watchObservedRunningTime="2026-03-12 14:52:51.621125102 +0000 UTC m=+1030.628866039" Mar 12 14:52:51.638928 master-0 kubenswrapper[37036]: I0312 14:52:51.637231 37036 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.872110625 podStartE2EDuration="8.637188595s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="2026-03-12 14:52:45.562811065 +0000 UTC m=+1024.570552012" lastFinishedPulling="2026-03-12 14:52:50.327889045 +0000 UTC m=+1029.335629982" observedRunningTime="2026-03-12 14:52:51.624667089 +0000 UTC m=+1030.632408046" watchObservedRunningTime="2026-03-12 14:52:51.637188595 +0000 UTC m=+1030.644929522"
Mar 12 14:52:51.657567 master-0 kubenswrapper[37036]: I0312 14:52:51.656526 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.857337385 podStartE2EDuration="8.656510057s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="2026-03-12 14:52:45.56300805 +0000 UTC m=+1024.570748997" lastFinishedPulling="2026-03-12 14:52:50.362180732 +0000 UTC m=+1029.369921669" observedRunningTime="2026-03-12 14:52:51.655489661 +0000 UTC m=+1030.663230598" watchObservedRunningTime="2026-03-12 14:52:51.656510057 +0000 UTC m=+1030.664250994"
Mar 12 14:52:51.702536 master-0 kubenswrapper[37036]: I0312 14:52:51.702224 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.483139516 podStartE2EDuration="8.702206152s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="2026-03-12 14:52:45.098811715 +0000 UTC m=+1024.106552652" lastFinishedPulling="2026-03-12 14:52:50.317878351 +0000 UTC m=+1029.325619288" observedRunningTime="2026-03-12 14:52:51.690815374 +0000 UTC m=+1030.698556321" watchObservedRunningTime="2026-03-12 14:52:51.702206152 +0000 UTC m=+1030.709947089"
Mar 12 14:52:52.144759 master-0 kubenswrapper[37036]: I0312 14:52:52.144665 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:52:52.173227 master-0 kubenswrapper[37036]: I0312 14:52:52.173169 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcrbr\" (UniqueName: \"kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr\") pod \"38e34f13-efba-4d2b-b59f-aa4b83a99280\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") "
Mar 12 14:52:52.173425 master-0 kubenswrapper[37036]: I0312 14:52:52.173358 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs\") pod \"38e34f13-efba-4d2b-b59f-aa4b83a99280\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") "
Mar 12 14:52:52.173502 master-0 kubenswrapper[37036]: I0312 14:52:52.173483 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle\") pod \"38e34f13-efba-4d2b-b59f-aa4b83a99280\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") "
Mar 12 14:52:52.173553 master-0 kubenswrapper[37036]: I0312 14:52:52.173541 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data\") pod \"38e34f13-efba-4d2b-b59f-aa4b83a99280\" (UID: \"38e34f13-efba-4d2b-b59f-aa4b83a99280\") "
Mar 12 14:52:52.173895 master-0 kubenswrapper[37036]: I0312 14:52:52.173875 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs" (OuterVolumeSpecName: "logs") pod "38e34f13-efba-4d2b-b59f-aa4b83a99280" (UID: "38e34f13-efba-4d2b-b59f-aa4b83a99280"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:52:52.174236 master-0 kubenswrapper[37036]: I0312 14:52:52.174215 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e34f13-efba-4d2b-b59f-aa4b83a99280-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:52.176830 master-0 kubenswrapper[37036]: I0312 14:52:52.176777 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr" (OuterVolumeSpecName: "kube-api-access-tcrbr") pod "38e34f13-efba-4d2b-b59f-aa4b83a99280" (UID: "38e34f13-efba-4d2b-b59f-aa4b83a99280"). InnerVolumeSpecName "kube-api-access-tcrbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:52:52.203099 master-0 kubenswrapper[37036]: I0312 14:52:52.203035 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data" (OuterVolumeSpecName: "config-data") pod "38e34f13-efba-4d2b-b59f-aa4b83a99280" (UID: "38e34f13-efba-4d2b-b59f-aa4b83a99280"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:52:52.204726 master-0 kubenswrapper[37036]: I0312 14:52:52.204657 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e34f13-efba-4d2b-b59f-aa4b83a99280" (UID: "38e34f13-efba-4d2b-b59f-aa4b83a99280"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:52:52.279500 master-0 kubenswrapper[37036]: I0312 14:52:52.279413 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcrbr\" (UniqueName: \"kubernetes.io/projected/38e34f13-efba-4d2b-b59f-aa4b83a99280-kube-api-access-tcrbr\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:52.279500 master-0 kubenswrapper[37036]: I0312 14:52:52.279492 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:52.279500 master-0 kubenswrapper[37036]: I0312 14:52:52.279508 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e34f13-efba-4d2b-b59f-aa4b83a99280-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:52.545929 master-0 kubenswrapper[37036]: I0312 14:52:52.545337 37036 generic.go:334] "Generic (PLEG): container finished" podID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerID="9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93" exitCode=0
Mar 12 14:52:52.545929 master-0 kubenswrapper[37036]: I0312 14:52:52.545442 37036 generic.go:334] "Generic (PLEG): container finished" podID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerID="5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b" exitCode=143
Mar 12 14:52:52.549924 master-0 kubenswrapper[37036]: I0312 14:52:52.547459 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerDied","Data":"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"}
Mar 12 14:52:52.549924 master-0 kubenswrapper[37036]: I0312 14:52:52.547498 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:52:52.549924 master-0 kubenswrapper[37036]: I0312 14:52:52.547579 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerDied","Data":"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"}
Mar 12 14:52:52.549924 master-0 kubenswrapper[37036]: I0312 14:52:52.547601 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38e34f13-efba-4d2b-b59f-aa4b83a99280","Type":"ContainerDied","Data":"837222c1ff58df3c4056a59cda5ab585007e71f8bf49d4bad28a53647a85af60"}
Mar 12 14:52:52.549924 master-0 kubenswrapper[37036]: I0312 14:52:52.547637 37036 scope.go:117] "RemoveContainer" containerID="9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"
Mar 12 14:52:52.616669 master-0 kubenswrapper[37036]: I0312 14:52:52.615915 37036 scope.go:117] "RemoveContainer" containerID="5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"
Mar 12 14:52:52.669304 master-0 kubenswrapper[37036]: I0312 14:52:52.669140 37036 scope.go:117] "RemoveContainer" containerID="9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"
Mar 12 14:52:52.677801 master-0 kubenswrapper[37036]: E0312 14:52:52.674089 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93\": container with ID starting with 9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93 not found: ID does not exist" containerID="9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"
Mar 12 14:52:52.677801 master-0 kubenswrapper[37036]: I0312 14:52:52.674170 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:52:52.677801 master-0 kubenswrapper[37036]: I0312 14:52:52.674191 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"} err="failed to get container status \"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93\": rpc error: code = NotFound desc = could not find container \"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93\": container with ID starting with 9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93 not found: ID does not exist"
Mar 12 14:52:52.677801 master-0 kubenswrapper[37036]: I0312 14:52:52.674314 37036 scope.go:117] "RemoveContainer" containerID="5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"
Mar 12 14:52:52.678391 master-0 kubenswrapper[37036]: E0312 14:52:52.678339 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b\": container with ID starting with 5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b not found: ID does not exist" containerID="5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"
Mar 12 14:52:52.678499 master-0 kubenswrapper[37036]: I0312 14:52:52.678442 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"} err="failed to get container status \"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b\": rpc error: code = NotFound desc = could not find container \"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b\": container with ID starting with 5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b not found: ID does not exist"
Mar 12 14:52:52.678499 master-0 kubenswrapper[37036]: I0312 14:52:52.678480 37036 scope.go:117] "RemoveContainer" containerID="9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"
Mar 12 14:52:52.679659 master-0 kubenswrapper[37036]: I0312 14:52:52.679623 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93"} err="failed to get container status \"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93\": rpc error: code = NotFound desc = could not find container \"9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93\": container with ID starting with 9745092af9325d00f5ee3367c505b093a44cd1b2d3c61ba62ba6fa56b7639a93 not found: ID does not exist"
Mar 12 14:52:52.679733 master-0 kubenswrapper[37036]: I0312 14:52:52.679683 37036 scope.go:117] "RemoveContainer" containerID="5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"
Mar 12 14:52:52.680578 master-0 kubenswrapper[37036]: I0312 14:52:52.680523 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b"} err="failed to get container status \"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b\": rpc error: code = NotFound desc = could not find container \"5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b\": container with ID starting with 5a95f6721f84072b25a0ecb69b186972fd70618817e3d96b4837ed206c55d96b not found: ID does not exist"
Mar 12 14:52:52.719942 master-0 kubenswrapper[37036]: I0312 14:52:52.715247 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:52:52.797835 master-0 kubenswrapper[37036]: I0312 14:52:52.797781 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:52:52.808536 master-0 kubenswrapper[37036]: E0312 14:52:52.808493 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-log"
Mar 12 14:52:52.808763 master-0 kubenswrapper[37036]: I0312 14:52:52.808750 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-log"
Mar 12 14:52:52.808862 master-0 kubenswrapper[37036]: E0312 14:52:52.808850 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-metadata"
Mar 12 14:52:52.809036 master-0 kubenswrapper[37036]: I0312 14:52:52.808936 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-metadata"
Mar 12 14:52:52.812225 master-0 kubenswrapper[37036]: I0312 14:52:52.811885 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-metadata"
Mar 12 14:52:52.812708 master-0 kubenswrapper[37036]: I0312 14:52:52.812689 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" containerName="nova-metadata-log"
Mar 12 14:52:52.819565 master-0 kubenswrapper[37036]: I0312 14:52:52.819338 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:52:52.823646 master-0 kubenswrapper[37036]: I0312 14:52:52.823610 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 14:52:52.824320 master-0 kubenswrapper[37036]: I0312 14:52:52.824299 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 12 14:52:52.833336 master-0 kubenswrapper[37036]: I0312 14:52:52.829393 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:52:52.918066 master-0 kubenswrapper[37036]: I0312 14:52:52.917403 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:52.918066 master-0 kubenswrapper[37036]: I0312 14:52:52.917472 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:52.918066 master-0 kubenswrapper[37036]: I0312 14:52:52.917596 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5gn\" (UniqueName: \"kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:52.918066 master-0 kubenswrapper[37036]: I0312 14:52:52.917630 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:52.918066 master-0 kubenswrapper[37036]: I0312 14:52:52.917700 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.021439 master-0 kubenswrapper[37036]: I0312 14:52:53.021380 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.021579 master-0 kubenswrapper[37036]: I0312 14:52:53.021515 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.021579 master-0 kubenswrapper[37036]: I0312 14:52:53.021540 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.021690 master-0 kubenswrapper[37036]: I0312 14:52:53.021628 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5gn\" (UniqueName: \"kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.021690 master-0 kubenswrapper[37036]: I0312 14:52:53.021653 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.022577 master-0 kubenswrapper[37036]: I0312 14:52:53.022531 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.026615 master-0 kubenswrapper[37036]: I0312 14:52:53.026564 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.026785 master-0 kubenswrapper[37036]: I0312 14:52:53.026747 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.029460 master-0 kubenswrapper[37036]: I0312 14:52:53.029408 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.043192 master-0 kubenswrapper[37036]: I0312 14:52:53.043114 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5gn\" (UniqueName: \"kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn\") pod \"nova-metadata-0\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " pod="openstack/nova-metadata-0"
Mar 12 14:52:53.154040 master-0 kubenswrapper[37036]: I0312 14:52:53.153961 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:52:53.281632 master-0 kubenswrapper[37036]: I0312 14:52:53.276163 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e34f13-efba-4d2b-b59f-aa4b83a99280" path="/var/lib/kubelet/pods/38e34f13-efba-4d2b-b59f-aa4b83a99280/volumes"
Mar 12 14:52:53.573923 master-0 kubenswrapper[37036]: I0312 14:52:53.571110 37036 generic.go:334] "Generic (PLEG): container finished" podID="8c0524b9-cbf3-40e3-9424-98b634ba1b10" containerID="5b9d6b8d3e41d2665fcb1a46393e0bcde5da7071984972a5a11923715f59c0b5" exitCode=0
Mar 12 14:52:53.573923 master-0 kubenswrapper[37036]: I0312 14:52:53.571179 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerDied","Data":"5b9d6b8d3e41d2665fcb1a46393e0bcde5da7071984972a5a11923715f59c0b5"}
Mar 12 14:52:53.759189 master-0 kubenswrapper[37036]: I0312 14:52:53.758363 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:52:54.110776 master-0 kubenswrapper[37036]: I0312 14:52:54.110721 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 14:52:54.111038 master-0 kubenswrapper[37036]: I0312 14:52:54.111024 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 14:52:54.163620 master-0 kubenswrapper[37036]: I0312 14:52:54.163591 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 14:52:54.163761 master-0 kubenswrapper[37036]: I0312 14:52:54.163749 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 12 14:52:54.220873 master-0 kubenswrapper[37036]: I0312 14:52:54.220820 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 14:52:54.363106 master-0 kubenswrapper[37036]: I0312 14:52:54.359614 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:52:54.520134 master-0 kubenswrapper[37036]: I0312 14:52:54.520078 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b98899985-qcjxc"
Mar 12 14:52:54.599821 master-0 kubenswrapper[37036]: I0312 14:52:54.595810 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"5365a4039387485f47a4d57ccab6cead004ccc0230b083ef9287ea578de19ba0"}
Mar 12 14:52:54.599821 master-0 kubenswrapper[37036]: I0312 14:52:54.598295 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerStarted","Data":"004c61409f2e3a6bc567b74290d34bf02b030cfe07c08438dad5c59fa0629c69"}
Mar 12 14:52:54.599821 master-0 kubenswrapper[37036]: I0312 14:52:54.598354 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerStarted","Data":"1cff1285df1e4926be4b0ea02a568b9d078b2a21313afdcc6fefda37adff72ee"}
Mar 12 14:52:54.599821 master-0 kubenswrapper[37036]: I0312 14:52:54.598372 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerStarted","Data":"0d3912ad4de24958321ac7347ed3dea7028b0e0ea2a3fc0816093a04850a64a1"}
Mar 12 14:52:54.660451 master-0 kubenswrapper[37036]: I0312 14:52:54.658447 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 14:52:54.660981 master-0 kubenswrapper[37036]: I0312 14:52:54.660946 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"]
Mar 12 14:52:54.665622 master-0 kubenswrapper[37036]: I0312 14:52:54.664984 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7947596457-rj5wn" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="dnsmasq-dns" containerID="cri-o://468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b" gracePeriod=10
Mar 12 14:52:54.810036 master-0 kubenswrapper[37036]: I0312 14:52:54.809927 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.809870583 podStartE2EDuration="2.809870583s" podCreationTimestamp="2026-03-12 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:52:54.642930117 +0000 UTC m=+1033.650671054" watchObservedRunningTime="2026-03-12 14:52:54.809870583 +0000 UTC m=+1033.817611520"
Mar 12 14:52:55.196349 master-0 kubenswrapper[37036]: I0312 14:52:55.196114 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:52:55.196610 master-0 kubenswrapper[37036]: I0312 14:52:55.196454 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:52:55.446011 master-0 kubenswrapper[37036]: I0312 14:52:55.442531 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7947596457-rj5wn"
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.550882 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.550971 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.551091 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8fxl\" (UniqueName: \"kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.551149 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.551323 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.551940 master-0 kubenswrapper[37036]: I0312 14:52:55.551376 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc\") pod \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\" (UID: \"fbb29ced-f0e0-44d7-bd04-d332938eea7b\") "
Mar 12 14:52:55.574917 master-0 kubenswrapper[37036]: I0312 14:52:55.570931 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl" (OuterVolumeSpecName: "kube-api-access-r8fxl") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "kube-api-access-r8fxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:52:55.610924 master-0 kubenswrapper[37036]: I0312 14:52:55.608202 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config" (OuterVolumeSpecName: "config") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:52:55.656921 master-0 kubenswrapper[37036]: I0312 14:52:55.654335 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8fxl\" (UniqueName: \"kubernetes.io/projected/fbb29ced-f0e0-44d7-bd04-d332938eea7b-kube-api-access-r8fxl\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:55.656921 master-0 kubenswrapper[37036]: I0312 14:52:55.654377 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-config\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:55.656921 master-0 kubenswrapper[37036]: I0312 14:52:55.654792 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:52:55.670918 master-0 kubenswrapper[37036]: I0312 14:52:55.668540 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"b7de0847cee3985012533020f2509f5b3aa8543fb5f0dbd5d3e1e57063b3c4ca"}
Mar 12 14:52:55.675918 master-0 kubenswrapper[37036]: I0312 14:52:55.673054 37036 generic.go:334] "Generic (PLEG): container finished" podID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerID="468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b" exitCode=0
Mar 12 14:52:55.675918 master-0 kubenswrapper[37036]: I0312 14:52:55.673120 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerDied","Data":"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"}
Mar 12 14:52:55.675918 master-0 kubenswrapper[37036]: I0312 14:52:55.673151 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7947596457-rj5wn" event={"ID":"fbb29ced-f0e0-44d7-bd04-d332938eea7b","Type":"ContainerDied","Data":"180a4096e51ce18cd543e292b1b85b408608e09847aa43e08fa72b6448b6b506"}
Mar 12 14:52:55.675918 master-0 kubenswrapper[37036]: I0312 14:52:55.673167 37036 scope.go:117] "RemoveContainer" containerID="468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"
Mar 12 14:52:55.675918 master-0 kubenswrapper[37036]: I0312 14:52:55.673280 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7947596457-rj5wn"
Mar 12 14:52:55.702988 master-0 kubenswrapper[37036]: I0312 14:52:55.702378 37036 generic.go:334] "Generic (PLEG): container finished" podID="ccb059aa-827d-46f3-8218-8178e9eeafbd" containerID="9d9929604e826c941aa4dc2d411654444de355770aafd4dd9394826d6b33cd94" exitCode=0
Mar 12 14:52:55.705940 master-0 kubenswrapper[37036]: I0312 14:52:55.704706 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvjms" event={"ID":"ccb059aa-827d-46f3-8218-8178e9eeafbd","Type":"ContainerDied","Data":"9d9929604e826c941aa4dc2d411654444de355770aafd4dd9394826d6b33cd94"}
Mar 12 14:52:55.741919 master-0 kubenswrapper[37036]: I0312 14:52:55.736968 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:52:55.741919 master-0 kubenswrapper[37036]: I0312 14:52:55.739439 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:52:55.762934 master-0 kubenswrapper[37036]: I0312 14:52:55.758880 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:55.762934 master-0 kubenswrapper[37036]: I0312 14:52:55.758947 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:55.762934 master-0 kubenswrapper[37036]: I0312 14:52:55.758959 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:55.856945 master-0 kubenswrapper[37036]: I0312 14:52:55.854306 37036 scope.go:117] "RemoveContainer" containerID="6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"
Mar 12 14:52:55.866922 master-0 kubenswrapper[37036]: I0312 14:52:55.863465 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fbb29ced-f0e0-44d7-bd04-d332938eea7b" (UID: "fbb29ced-f0e0-44d7-bd04-d332938eea7b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 14:52:55.966139 master-0 kubenswrapper[37036]: I0312 14:52:55.965996 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fbb29ced-f0e0-44d7-bd04-d332938eea7b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 14:52:56.000716 master-0 kubenswrapper[37036]: I0312 14:52:56.000205 37036 scope.go:117] "RemoveContainer" containerID="468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"
Mar 12 14:52:56.006678 master-0 kubenswrapper[37036]: E0312 14:52:56.006418 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b\": container with ID starting with 468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b not found: ID does not exist" containerID="468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"
Mar 12 14:52:56.006678 master-0 kubenswrapper[37036]: I0312 14:52:56.006516 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b"} err="failed to get container status \"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b\": rpc error: code = NotFound desc = could not find container \"468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b\": container with ID starting with 468c8f2fa5121b935a4598fe81d551d0838bd7545898ee9321ac7fd0bd1de48b not found: ID does not exist"
Mar 12 14:52:56.006678 master-0 kubenswrapper[37036]: I0312 14:52:56.006547 37036 scope.go:117] "RemoveContainer" containerID="6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"
Mar 12 14:52:56.007573 master-0 kubenswrapper[37036]: E0312 14:52:56.007420 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a\": container with ID starting with 6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a not found: ID does not exist" containerID="6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"
Mar 12 14:52:56.007656 master-0 kubenswrapper[37036]: I0312 14:52:56.007568 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a"} err="failed to get container status \"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a\": rpc error: code = NotFound desc = could not find container \"6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a\": container with ID starting with 6248ed2f96f7cf2137fe52c924678c8e383999b87c2a87f25523bec8220c301a not found: ID does not exist"
Mar 12 14:52:56.094443 master-0 kubenswrapper[37036]: I0312 14:52:56.093984 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"]
Mar 12 14:52:56.116402 master-0 kubenswrapper[37036]: I0312 14:52:56.116342 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7947596457-rj5wn"]
Mar 12 14:52:56.719046 master-0 kubenswrapper[37036]: I0312 14:52:56.718988 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"8c0524b9-cbf3-40e3-9424-98b634ba1b10","Type":"ContainerStarted","Data":"ac6993210609b3e8ed7c5b4af2ad9d490fcfee7f9bf3a91f9c6e299472408b1c"}
Mar 12 14:52:56.720087 master-0 kubenswrapper[37036]: I0312 14:52:56.720021 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 12 14:52:56.720164 master-0 kubenswrapper[37036]: I0312 14:52:56.720113 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 12 14:52:56.772908 master-0 kubenswrapper[37036]: I0312 14:52:56.772633 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=69.346954316 podStartE2EDuration="1m51.772611978s" podCreationTimestamp="2026-03-12 14:51:05 +0000 UTC" firstStartedPulling="2026-03-12 14:51:17.273339712 +0000 UTC m=+936.281080649" lastFinishedPulling="2026-03-12 14:51:59.698997374 +0000 UTC m=+978.706738311" observedRunningTime="2026-03-12 14:52:56.754886565 +0000 UTC m=+1035.762627532" watchObservedRunningTime="2026-03-12 14:52:56.772611978 +0000 UTC m=+1035.780352915"
Mar 12 14:52:57.257956 master-0 kubenswrapper[37036]: I0312 14:52:57.254552 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" path="/var/lib/kubelet/pods/fbb29ced-f0e0-44d7-bd04-d332938eea7b/volumes"
Mar 12 14:52:57.271048 master-0 kubenswrapper[37036]: I0312 14:52:57.270229 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvjms"
Mar 12 14:52:57.356346 master-0 kubenswrapper[37036]: I0312 14:52:57.356028 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts\") pod \"ccb059aa-827d-46f3-8218-8178e9eeafbd\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") "
Mar 12 14:52:57.356346 master-0 kubenswrapper[37036]: I0312 14:52:57.356246 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87s7j\" (UniqueName: \"kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j\") pod \"ccb059aa-827d-46f3-8218-8178e9eeafbd\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") "
Mar 12 14:52:57.356346 master-0 kubenswrapper[37036]: I0312 14:52:57.356362 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data\") pod \"ccb059aa-827d-46f3-8218-8178e9eeafbd\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " Mar 12 14:52:57.356727 master-0 kubenswrapper[37036]: I0312 14:52:57.356416 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle\") pod \"ccb059aa-827d-46f3-8218-8178e9eeafbd\" (UID: \"ccb059aa-827d-46f3-8218-8178e9eeafbd\") " Mar 12 14:52:57.380026 master-0 kubenswrapper[37036]: I0312 14:52:57.379879 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j" (OuterVolumeSpecName: "kube-api-access-87s7j") pod "ccb059aa-827d-46f3-8218-8178e9eeafbd" (UID: "ccb059aa-827d-46f3-8218-8178e9eeafbd"). InnerVolumeSpecName "kube-api-access-87s7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:57.380298 master-0 kubenswrapper[37036]: I0312 14:52:57.380027 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts" (OuterVolumeSpecName: "scripts") pod "ccb059aa-827d-46f3-8218-8178e9eeafbd" (UID: "ccb059aa-827d-46f3-8218-8178e9eeafbd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:57.442944 master-0 kubenswrapper[37036]: I0312 14:52:57.438483 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 12 14:52:57.452746 master-0 kubenswrapper[37036]: I0312 14:52:57.452660 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccb059aa-827d-46f3-8218-8178e9eeafbd" (UID: "ccb059aa-827d-46f3-8218-8178e9eeafbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:57.461826 master-0 kubenswrapper[37036]: I0312 14:52:57.461763 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:57.461826 master-0 kubenswrapper[37036]: I0312 14:52:57.461824 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:57.461826 master-0 kubenswrapper[37036]: I0312 14:52:57.461835 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87s7j\" (UniqueName: \"kubernetes.io/projected/ccb059aa-827d-46f3-8218-8178e9eeafbd-kube-api-access-87s7j\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:57.483863 master-0 kubenswrapper[37036]: I0312 14:52:57.483808 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data" (OuterVolumeSpecName: "config-data") pod "ccb059aa-827d-46f3-8218-8178e9eeafbd" (UID: "ccb059aa-827d-46f3-8218-8178e9eeafbd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:52:57.565758 master-0 kubenswrapper[37036]: I0312 14:52:57.564431 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb059aa-827d-46f3-8218-8178e9eeafbd-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:52:57.742535 master-0 kubenswrapper[37036]: I0312 14:52:57.741073 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hvjms" event={"ID":"ccb059aa-827d-46f3-8218-8178e9eeafbd","Type":"ContainerDied","Data":"603f67e32048c755c313d35c6ff6f55173fee78eaa82d11ab62231d1e946e4e0"} Mar 12 14:52:57.742535 master-0 kubenswrapper[37036]: I0312 14:52:57.741333 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603f67e32048c755c313d35c6ff6f55173fee78eaa82d11ab62231d1e946e4e0" Mar 12 14:52:57.742535 master-0 kubenswrapper[37036]: I0312 14:52:57.741748 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hvjms" Mar 12 14:52:57.950017 master-0 kubenswrapper[37036]: I0312 14:52:57.948497 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:52:57.950017 master-0 kubenswrapper[37036]: I0312 14:52:57.948811 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-log" containerID="cri-o://631e6e95777e01adedcf1bf2e1756babcef4928935880353f50d3247fc2cf43a" gracePeriod=30 Mar 12 14:52:57.950017 master-0 kubenswrapper[37036]: I0312 14:52:57.949052 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-api" containerID="cri-o://2a8d7b2f366ebe6213104dbff2a30441e2cc0e9ea1518c28b55dbe08143c8527" gracePeriod=30 Mar 12 14:52:57.981764 master-0 kubenswrapper[37036]: I0312 14:52:57.980014 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:52:57.981764 master-0 kubenswrapper[37036]: I0312 14:52:57.980351 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" containerID="cri-o://ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" gracePeriod=30 Mar 12 14:52:57.992965 master-0 kubenswrapper[37036]: I0312 14:52:57.992781 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:52:57.993168 master-0 kubenswrapper[37036]: I0312 14:52:57.993072 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-log" containerID="cri-o://1cff1285df1e4926be4b0ea02a568b9d078b2a21313afdcc6fefda37adff72ee" 
gracePeriod=30 Mar 12 14:52:57.993807 master-0 kubenswrapper[37036]: I0312 14:52:57.993218 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-metadata" containerID="cri-o://004c61409f2e3a6bc567b74290d34bf02b030cfe07c08438dad5c59fa0629c69" gracePeriod=30 Mar 12 14:52:58.154548 master-0 kubenswrapper[37036]: I0312 14:52:58.154481 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 14:52:58.154788 master-0 kubenswrapper[37036]: I0312 14:52:58.154560 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 14:52:59.163764 master-0 kubenswrapper[37036]: E0312 14:52:59.163634 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:52:59.164320 master-0 kubenswrapper[37036]: E0312 14:52:59.164136 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:52:59.164567 master-0 kubenswrapper[37036]: E0312 14:52:59.164519 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running 
failed: container process not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:52:59.164639 master-0 kubenswrapper[37036]: E0312 14:52:59.164566 37036 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" Mar 12 14:52:59.601025 master-0 kubenswrapper[37036]: I0312 14:52:59.600975 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Mar 12 14:52:59.829957 master-0 kubenswrapper[37036]: I0312 14:52:59.829653 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 12 14:53:00.824396 master-0 kubenswrapper[37036]: I0312 14:53:00.824291 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 12 14:53:04.164249 master-0 kubenswrapper[37036]: E0312 14:53:04.164185 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:53:04.164970 master-0 kubenswrapper[37036]: E0312 14:53:04.164760 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process 
not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:53:04.165306 master-0 kubenswrapper[37036]: E0312 14:53:04.165238 37036 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 14:53:04.165306 master-0 kubenswrapper[37036]: E0312 14:53:04.165279 37036 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" Mar 12 14:53:04.879273 master-0 kubenswrapper[37036]: I0312 14:53:04.879143 37036 generic.go:334] "Generic (PLEG): container finished" podID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerID="004c61409f2e3a6bc567b74290d34bf02b030cfe07c08438dad5c59fa0629c69" exitCode=0 Mar 12 14:53:04.879273 master-0 kubenswrapper[37036]: I0312 14:53:04.879186 37036 generic.go:334] "Generic (PLEG): container finished" podID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerID="1cff1285df1e4926be4b0ea02a568b9d078b2a21313afdcc6fefda37adff72ee" exitCode=143 Mar 12 14:53:04.879273 master-0 kubenswrapper[37036]: I0312 14:53:04.879211 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerDied","Data":"004c61409f2e3a6bc567b74290d34bf02b030cfe07c08438dad5c59fa0629c69"} Mar 12 14:53:04.879273 master-0 kubenswrapper[37036]: I0312 
14:53:04.879267 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerDied","Data":"1cff1285df1e4926be4b0ea02a568b9d078b2a21313afdcc6fefda37adff72ee"} Mar 12 14:53:04.882540 master-0 kubenswrapper[37036]: I0312 14:53:04.882505 37036 generic.go:334] "Generic (PLEG): container finished" podID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerID="2a8d7b2f366ebe6213104dbff2a30441e2cc0e9ea1518c28b55dbe08143c8527" exitCode=0 Mar 12 14:53:04.882540 master-0 kubenswrapper[37036]: I0312 14:53:04.882530 37036 generic.go:334] "Generic (PLEG): container finished" podID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerID="631e6e95777e01adedcf1bf2e1756babcef4928935880353f50d3247fc2cf43a" exitCode=143 Mar 12 14:53:04.882746 master-0 kubenswrapper[37036]: I0312 14:53:04.882571 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerDied","Data":"2a8d7b2f366ebe6213104dbff2a30441e2cc0e9ea1518c28b55dbe08143c8527"} Mar 12 14:53:04.882746 master-0 kubenswrapper[37036]: I0312 14:53:04.882588 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerDied","Data":"631e6e95777e01adedcf1bf2e1756babcef4928935880353f50d3247fc2cf43a"} Mar 12 14:53:04.884508 master-0 kubenswrapper[37036]: I0312 14:53:04.884476 37036 generic.go:334] "Generic (PLEG): container finished" podID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" exitCode=0 Mar 12 14:53:04.884611 master-0 kubenswrapper[37036]: I0312 14:53:04.884521 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18749a86-eed1-4fa8-b31d-98f0a3fc67fb","Type":"ContainerDied","Data":"ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121"} Mar 
12 14:53:04.889324 master-0 kubenswrapper[37036]: I0312 14:53:04.888630 37036 generic.go:334] "Generic (PLEG): container finished" podID="46e4d8ed-5640-49bc-ae47-44c113072fab" containerID="21c24d49863539a1e314662967888c609742206be5e838a00017444084a9bd56" exitCode=0 Mar 12 14:53:04.889324 master-0 kubenswrapper[37036]: I0312 14:53:04.888660 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q72km" event={"ID":"46e4d8ed-5640-49bc-ae47-44c113072fab","Type":"ContainerDied","Data":"21c24d49863539a1e314662967888c609742206be5e838a00017444084a9bd56"} Mar 12 14:53:05.217191 master-0 kubenswrapper[37036]: I0312 14:53:05.217155 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:53:05.229487 master-0 kubenswrapper[37036]: I0312 14:53:05.228599 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:53:05.261521 master-0 kubenswrapper[37036]: I0312 14:53:05.258661 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.400743 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data\") pod \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.400878 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle\") pod \"c26314aa-b970-4b92-8037-7485b8d5b20b\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.400933 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle\") pod \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401122 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5gn\" (UniqueName: \"kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn\") pod \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401156 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2d7\" (UniqueName: \"kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7\") pod \"c26314aa-b970-4b92-8037-7485b8d5b20b\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401237 
37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24kzt\" (UniqueName: \"kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt\") pod \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401286 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data\") pod \"c26314aa-b970-4b92-8037-7485b8d5b20b\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401329 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs\") pod \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401360 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs\") pod \"c26314aa-b970-4b92-8037-7485b8d5b20b\" (UID: \"c26314aa-b970-4b92-8037-7485b8d5b20b\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401408 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data\") pod \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401479 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs\") pod \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\" (UID: \"844da800-0e8b-47b9-ac8f-4303f70f0cf3\") " Mar 12 14:53:05.403023 master-0 kubenswrapper[37036]: I0312 14:53:05.401519 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle\") pod \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\" (UID: \"18749a86-eed1-4fa8-b31d-98f0a3fc67fb\") " Mar 12 14:53:05.417943 master-0 kubenswrapper[37036]: I0312 14:53:05.411457 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs" (OuterVolumeSpecName: "logs") pod "844da800-0e8b-47b9-ac8f-4303f70f0cf3" (UID: "844da800-0e8b-47b9-ac8f-4303f70f0cf3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:05.417943 master-0 kubenswrapper[37036]: I0312 14:53:05.415751 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs" (OuterVolumeSpecName: "logs") pod "c26314aa-b970-4b92-8037-7485b8d5b20b" (UID: "c26314aa-b970-4b92-8037-7485b8d5b20b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:05.443707 master-0 kubenswrapper[37036]: I0312 14:53:05.429505 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7" (OuterVolumeSpecName: "kube-api-access-pm2d7") pod "c26314aa-b970-4b92-8037-7485b8d5b20b" (UID: "c26314aa-b970-4b92-8037-7485b8d5b20b"). InnerVolumeSpecName "kube-api-access-pm2d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:05.443707 master-0 kubenswrapper[37036]: I0312 14:53:05.429581 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn" (OuterVolumeSpecName: "kube-api-access-fz5gn") pod "844da800-0e8b-47b9-ac8f-4303f70f0cf3" (UID: "844da800-0e8b-47b9-ac8f-4303f70f0cf3"). InnerVolumeSpecName "kube-api-access-fz5gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:05.451616 master-0 kubenswrapper[37036]: I0312 14:53:05.451537 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt" (OuterVolumeSpecName: "kube-api-access-24kzt") pod "18749a86-eed1-4fa8-b31d-98f0a3fc67fb" (UID: "18749a86-eed1-4fa8-b31d-98f0a3fc67fb"). InnerVolumeSpecName "kube-api-access-24kzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:05.505874 master-0 kubenswrapper[37036]: I0312 14:53:05.505811 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz5gn\" (UniqueName: \"kubernetes.io/projected/844da800-0e8b-47b9-ac8f-4303f70f0cf3-kube-api-access-fz5gn\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.505874 master-0 kubenswrapper[37036]: I0312 14:53:05.505866 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2d7\" (UniqueName: \"kubernetes.io/projected/c26314aa-b970-4b92-8037-7485b8d5b20b-kube-api-access-pm2d7\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.505874 master-0 kubenswrapper[37036]: I0312 14:53:05.505878 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24kzt\" (UniqueName: \"kubernetes.io/projected/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-kube-api-access-24kzt\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.506225 master-0 kubenswrapper[37036]: I0312 14:53:05.505887 37036 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/844da800-0e8b-47b9-ac8f-4303f70f0cf3-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.506225 master-0 kubenswrapper[37036]: I0312 14:53:05.505918 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c26314aa-b970-4b92-8037-7485b8d5b20b-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.587387 master-0 kubenswrapper[37036]: I0312 14:53:05.587299 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data" (OuterVolumeSpecName: "config-data") pod "18749a86-eed1-4fa8-b31d-98f0a3fc67fb" (UID: "18749a86-eed1-4fa8-b31d-98f0a3fc67fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.599425 master-0 kubenswrapper[37036]: I0312 14:53:05.598745 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data" (OuterVolumeSpecName: "config-data") pod "844da800-0e8b-47b9-ac8f-4303f70f0cf3" (UID: "844da800-0e8b-47b9-ac8f-4303f70f0cf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.611176 master-0 kubenswrapper[37036]: I0312 14:53:05.607740 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.611176 master-0 kubenswrapper[37036]: I0312 14:53:05.607792 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.615490 master-0 kubenswrapper[37036]: I0312 14:53:05.615445 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18749a86-eed1-4fa8-b31d-98f0a3fc67fb" (UID: "18749a86-eed1-4fa8-b31d-98f0a3fc67fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.644310 master-0 kubenswrapper[37036]: I0312 14:53:05.643158 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data" (OuterVolumeSpecName: "config-data") pod "c26314aa-b970-4b92-8037-7485b8d5b20b" (UID: "c26314aa-b970-4b92-8037-7485b8d5b20b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.646872 master-0 kubenswrapper[37036]: I0312 14:53:05.646822 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "844da800-0e8b-47b9-ac8f-4303f70f0cf3" (UID: "844da800-0e8b-47b9-ac8f-4303f70f0cf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.654578 master-0 kubenswrapper[37036]: I0312 14:53:05.654220 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c26314aa-b970-4b92-8037-7485b8d5b20b" (UID: "c26314aa-b970-4b92-8037-7485b8d5b20b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.682912 master-0 kubenswrapper[37036]: I0312 14:53:05.682850 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "844da800-0e8b-47b9-ac8f-4303f70f0cf3" (UID: "844da800-0e8b-47b9-ac8f-4303f70f0cf3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:05.709536 master-0 kubenswrapper[37036]: I0312 14:53:05.709469 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.709536 master-0 kubenswrapper[37036]: I0312 14:53:05.709526 37036 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.709536 master-0 kubenswrapper[37036]: I0312 14:53:05.709546 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18749a86-eed1-4fa8-b31d-98f0a3fc67fb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.709931 master-0 kubenswrapper[37036]: I0312 14:53:05.709557 37036 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c26314aa-b970-4b92-8037-7485b8d5b20b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.709931 master-0 kubenswrapper[37036]: I0312 14:53:05.709566 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844da800-0e8b-47b9-ac8f-4303f70f0cf3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:05.908201 master-0 kubenswrapper[37036]: I0312 14:53:05.908076 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c26314aa-b970-4b92-8037-7485b8d5b20b","Type":"ContainerDied","Data":"694b033464c22e5c50fcf6c0332fcc50c9fcc18663ffc32e2815238fe4a4790d"} Mar 12 14:53:05.908201 master-0 kubenswrapper[37036]: I0312 14:53:05.908143 37036 scope.go:117] "RemoveContainer" containerID="2a8d7b2f366ebe6213104dbff2a30441e2cc0e9ea1518c28b55dbe08143c8527" Mar 12 14:53:05.909248 master-0 kubenswrapper[37036]: I0312 14:53:05.909208 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:05.914784 master-0 kubenswrapper[37036]: I0312 14:53:05.914624 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"2a440e7b-a37d-4e7e-9873-ac70bc709a60","Type":"ContainerStarted","Data":"0bc61569ebc4456fbd2423bafa8f1801f5753a13746efca295086523e31782ab"} Mar 12 14:53:05.914883 master-0 kubenswrapper[37036]: I0312 14:53:05.914826 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:53:05.921935 master-0 kubenswrapper[37036]: I0312 14:53:05.921625 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18749a86-eed1-4fa8-b31d-98f0a3fc67fb","Type":"ContainerDied","Data":"b166050c38b65737a8b55c168f109a49622fba7f0fc622a8add704d9e9a714ad"} Mar 12 14:53:05.921935 master-0 kubenswrapper[37036]: I0312 14:53:05.921636 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:53:05.929015 master-0 kubenswrapper[37036]: I0312 14:53:05.926162 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:53:05.929015 master-0 kubenswrapper[37036]: I0312 14:53:05.928868 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"844da800-0e8b-47b9-ac8f-4303f70f0cf3","Type":"ContainerDied","Data":"0d3912ad4de24958321ac7347ed3dea7028b0e0ea2a3fc0816093a04850a64a1"} Mar 12 14:53:05.958935 master-0 kubenswrapper[37036]: I0312 14:53:05.956483 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.4687465189999998 podStartE2EDuration="22.956463023s" podCreationTimestamp="2026-03-12 14:52:43 +0000 UTC" firstStartedPulling="2026-03-12 14:52:44.724215479 +0000 UTC m=+1023.731956416" lastFinishedPulling="2026-03-12 14:53:05.211931983 +0000 UTC m=+1044.219672920" observedRunningTime="2026-03-12 14:53:05.938559185 +0000 UTC m=+1044.946300142" watchObservedRunningTime="2026-03-12 14:53:05.956463023 +0000 UTC m=+1044.964203960" Mar 12 14:53:05.984250 master-0 kubenswrapper[37036]: I0312 14:53:05.983968 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:53:05.985366 master-0 kubenswrapper[37036]: I0312 14:53:05.984812 37036 scope.go:117] "RemoveContainer" containerID="631e6e95777e01adedcf1bf2e1756babcef4928935880353f50d3247fc2cf43a" Mar 12 14:53:05.987207 master-0 kubenswrapper[37036]: I0312 14:53:05.987045 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 14:53:05.999195 master-0 kubenswrapper[37036]: I0312 14:53:05.998978 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:53:06.031068 master-0 kubenswrapper[37036]: I0312 14:53:06.031004 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:06.044330 master-0 kubenswrapper[37036]: I0312 14:53:06.044273 37036 
scope.go:117] "RemoveContainer" containerID="ac0dff2d2482ccfd5fd6d524f32466e8f407f9a336eb4c4c8a041a8168d40121" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: I0312 14:53:06.119732 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: E0312 14:53:06.120350 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-log" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: I0312 14:53:06.120365 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-log" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: E0312 14:53:06.120393 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-log" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: I0312 14:53:06.120400 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-log" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: E0312 14:53:06.120409 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="dnsmasq-dns" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: I0312 14:53:06.120415 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="dnsmasq-dns" Mar 12 14:53:06.120407 master-0 kubenswrapper[37036]: E0312 14:53:06.120427 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="init" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120434 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="init" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: 
E0312 14:53:06.120448 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb059aa-827d-46f3-8218-8178e9eeafbd" containerName="nova-manage" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120456 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb059aa-827d-46f3-8218-8178e9eeafbd" containerName="nova-manage" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: E0312 14:53:06.120478 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120484 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: E0312 14:53:06.120521 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-api" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120528 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-api" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: E0312 14:53:06.120539 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-metadata" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120545 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-metadata" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120753 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb29ced-f0e0-44d7-bd04-d332938eea7b" containerName="dnsmasq-dns" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120775 37036 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-api" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120785 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" containerName="nova-api-log" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120804 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-log" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120813 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb059aa-827d-46f3-8218-8178e9eeafbd" containerName="nova-manage" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120826 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" containerName="nova-scheduler-scheduler" Mar 12 14:53:06.121022 master-0 kubenswrapper[37036]: I0312 14:53:06.120838 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" containerName="nova-metadata-metadata" Mar 12 14:53:06.122251 master-0 kubenswrapper[37036]: I0312 14:53:06.122100 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:53:06.124562 master-0 kubenswrapper[37036]: I0312 14:53:06.124516 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 14:53:06.142330 master-0 kubenswrapper[37036]: I0312 14:53:06.140589 37036 scope.go:117] "RemoveContainer" containerID="004c61409f2e3a6bc567b74290d34bf02b030cfe07c08438dad5c59fa0629c69" Mar 12 14:53:06.144738 master-0 kubenswrapper[37036]: I0312 14:53:06.144502 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:06.193107 master-0 kubenswrapper[37036]: I0312 14:53:06.192591 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:53:06.197118 master-0 kubenswrapper[37036]: I0312 14:53:06.197063 37036 scope.go:117] "RemoveContainer" containerID="1cff1285df1e4926be4b0ea02a568b9d078b2a21313afdcc6fefda37adff72ee" Mar 12 14:53:06.210123 master-0 kubenswrapper[37036]: I0312 14:53:06.209921 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:06.229508 master-0 kubenswrapper[37036]: I0312 14:53:06.226823 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.229508 master-0 kubenswrapper[37036]: I0312 14:53:06.226904 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.229508 master-0 kubenswrapper[37036]: I0312 14:53:06.227015 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhw5\" (UniqueName: \"kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.229508 master-0 kubenswrapper[37036]: I0312 14:53:06.228381 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:06.229508 master-0 kubenswrapper[37036]: I0312 14:53:06.228491 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:06.238743 master-0 kubenswrapper[37036]: I0312 14:53:06.238418 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 14:53:06.283387 master-0 kubenswrapper[37036]: I0312 14:53:06.282531 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:06.311721 master-0 kubenswrapper[37036]: I0312 14:53:06.311613 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.329754 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.329840 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.329929 37036 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.329963 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.330013 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfdv\" (UniqueName: \"kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.330081 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhw5\" (UniqueName: \"kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.330804 master-0 kubenswrapper[37036]: I0312 14:53:06.330158 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.334438 master-0 kubenswrapper[37036]: I0312 14:53:06.334047 37036 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.353963 master-0 kubenswrapper[37036]: I0312 14:53:06.352984 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.358505 master-0 kubenswrapper[37036]: I0312 14:53:06.357765 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhw5\" (UniqueName: \"kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5\") pod \"nova-scheduler-0\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") " pod="openstack/nova-scheduler-0" Mar 12 14:53:06.358505 master-0 kubenswrapper[37036]: I0312 14:53:06.358274 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:06.360945 master-0 kubenswrapper[37036]: I0312 14:53:06.360859 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:53:06.365793 master-0 kubenswrapper[37036]: I0312 14:53:06.364444 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 14:53:06.365793 master-0 kubenswrapper[37036]: I0312 14:53:06.364625 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 14:53:06.375948 master-0 kubenswrapper[37036]: I0312 14:53:06.375855 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:06.432743 master-0 kubenswrapper[37036]: I0312 14:53:06.432607 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.432743 master-0 kubenswrapper[37036]: I0312 14:53:06.432717 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfdv\" (UniqueName: \"kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.432985 master-0 kubenswrapper[37036]: I0312 14:53:06.432757 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prm86\" (UniqueName: \"kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.432985 master-0 kubenswrapper[37036]: I0312 14:53:06.432806 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.432985 master-0 kubenswrapper[37036]: I0312 14:53:06.432882 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.432985 master-0 kubenswrapper[37036]: I0312 14:53:06.432925 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.432985 master-0 kubenswrapper[37036]: I0312 14:53:06.432949 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.433154 master-0 kubenswrapper[37036]: I0312 14:53:06.432996 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.433154 master-0 kubenswrapper[37036]: I0312 14:53:06.433032 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.434688 master-0 kubenswrapper[37036]: I0312 14:53:06.434632 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.436603 master-0 kubenswrapper[37036]: I0312 14:53:06.436562 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.440080 master-0 kubenswrapper[37036]: I0312 14:53:06.440007 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.453501 master-0 kubenswrapper[37036]: I0312 14:53:06.453454 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfdv\" (UniqueName: \"kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv\") pod \"nova-api-0\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " pod="openstack/nova-api-0" Mar 12 14:53:06.464280 master-0 kubenswrapper[37036]: I0312 14:53:06.464226 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 14:53:06.480594 master-0 kubenswrapper[37036]: I0312 14:53:06.480472 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q72km" Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.534716 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data\") pod \"46e4d8ed-5640-49bc-ae47-44c113072fab\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.534945 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle\") pod \"46e4d8ed-5640-49bc-ae47-44c113072fab\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.535070 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk4l4\" (UniqueName: \"kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4\") pod \"46e4d8ed-5640-49bc-ae47-44c113072fab\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.535102 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts\") pod \"46e4d8ed-5640-49bc-ae47-44c113072fab\" (UID: \"46e4d8ed-5640-49bc-ae47-44c113072fab\") " Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.535520 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prm86\" (UniqueName: \"kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 
14:53:06.535572 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.535643 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.535779 master-0 kubenswrapper[37036]: I0312 14:53:06.535669 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.537038 master-0 kubenswrapper[37036]: I0312 14:53:06.536982 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.544937 master-0 kubenswrapper[37036]: I0312 14:53:06.542255 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts" (OuterVolumeSpecName: "scripts") pod "46e4d8ed-5640-49bc-ae47-44c113072fab" (UID: "46e4d8ed-5640-49bc-ae47-44c113072fab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:06.544937 master-0 kubenswrapper[37036]: I0312 14:53:06.542707 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.557315 master-0 kubenswrapper[37036]: I0312 14:53:06.557231 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4" (OuterVolumeSpecName: "kube-api-access-zk4l4") pod "46e4d8ed-5640-49bc-ae47-44c113072fab" (UID: "46e4d8ed-5640-49bc-ae47-44c113072fab"). InnerVolumeSpecName "kube-api-access-zk4l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:06.565313 master-0 kubenswrapper[37036]: I0312 14:53:06.562943 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.565313 master-0 kubenswrapper[37036]: I0312 14:53:06.565097 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.565313 master-0 kubenswrapper[37036]: I0312 14:53:06.565229 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prm86\" (UniqueName: \"kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " 
pod="openstack/nova-metadata-0" Mar 12 14:53:06.571085 master-0 kubenswrapper[37036]: I0312 14:53:06.571033 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") " pod="openstack/nova-metadata-0" Mar 12 14:53:06.577741 master-0 kubenswrapper[37036]: I0312 14:53:06.577672 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46e4d8ed-5640-49bc-ae47-44c113072fab" (UID: "46e4d8ed-5640-49bc-ae47-44c113072fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:06.606292 master-0 kubenswrapper[37036]: I0312 14:53:06.606099 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data" (OuterVolumeSpecName: "config-data") pod "46e4d8ed-5640-49bc-ae47-44c113072fab" (UID: "46e4d8ed-5640-49bc-ae47-44c113072fab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:06.639811 master-0 kubenswrapper[37036]: I0312 14:53:06.639733 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:06.639811 master-0 kubenswrapper[37036]: I0312 14:53:06.639786 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk4l4\" (UniqueName: \"kubernetes.io/projected/46e4d8ed-5640-49bc-ae47-44c113072fab-kube-api-access-zk4l4\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:06.639811 master-0 kubenswrapper[37036]: I0312 14:53:06.639798 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:06.639811 master-0 kubenswrapper[37036]: I0312 14:53:06.639807 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4d8ed-5640-49bc-ae47-44c113072fab-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:06.640283 master-0 kubenswrapper[37036]: I0312 14:53:06.640010 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 14:53:06.774525 master-0 kubenswrapper[37036]: I0312 14:53:06.774302 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:53:06.965642 master-0 kubenswrapper[37036]: I0312 14:53:06.965518 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q72km" event={"ID":"46e4d8ed-5640-49bc-ae47-44c113072fab","Type":"ContainerDied","Data":"97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46"}
Mar 12 14:53:06.965642 master-0 kubenswrapper[37036]: I0312 14:53:06.965572 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a06cca7fbccb48c12eaf6a14b7f344c44c7445087652f518f4905fee5bdb46"
Mar 12 14:53:06.965858 master-0 kubenswrapper[37036]: I0312 14:53:06.965655 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q72km"
Mar 12 14:53:07.006785 master-0 kubenswrapper[37036]: I0312 14:53:07.006710 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:07.025470 master-0 kubenswrapper[37036]: W0312 14:53:07.021049 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b7dc25_a599_4d70_ab20_134de7116d36.slice/crio-fa7b5d075465e552484fbd9672e9b9f7db591955e604b1a26b8695e5f7852190 WatchSource:0}: Error finding container fa7b5d075465e552484fbd9672e9b9f7db591955e604b1a26b8695e5f7852190: Status 404 returned error can't find the container with id fa7b5d075465e552484fbd9672e9b9f7db591955e604b1a26b8695e5f7852190
Mar 12 14:53:07.156703 master-0 kubenswrapper[37036]: I0312 14:53:07.156617 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 14:53:07.157411 master-0 kubenswrapper[37036]: E0312 14:53:07.157371 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e4d8ed-5640-49bc-ae47-44c113072fab" containerName="nova-cell1-conductor-db-sync"
Mar 12 14:53:07.157411 master-0 kubenswrapper[37036]: I0312 14:53:07.157401 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e4d8ed-5640-49bc-ae47-44c113072fab" containerName="nova-cell1-conductor-db-sync"
Mar 12 14:53:07.158359 master-0 kubenswrapper[37036]: I0312 14:53:07.157740 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e4d8ed-5640-49bc-ae47-44c113072fab" containerName="nova-cell1-conductor-db-sync"
Mar 12 14:53:07.160010 master-0 kubenswrapper[37036]: I0312 14:53:07.159783 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.164842 master-0 kubenswrapper[37036]: I0312 14:53:07.162401 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 12 14:53:07.174455 master-0 kubenswrapper[37036]: I0312 14:53:07.174095 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 14:53:07.197913 master-0 kubenswrapper[37036]: W0312 14:53:07.197266 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f3abc9_a919_4f6b_8031_35d8546ad90e.slice/crio-c33aa0aba65a115e0273a1c26bfe208ac925a8317156362d7baa26f3062ba407 WatchSource:0}: Error finding container c33aa0aba65a115e0273a1c26bfe208ac925a8317156362d7baa26f3062ba407: Status 404 returned error can't find the container with id c33aa0aba65a115e0273a1c26bfe208ac925a8317156362d7baa26f3062ba407
Mar 12 14:53:07.215979 master-0 kubenswrapper[37036]: I0312 14:53:07.208384 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 14:53:07.279333 master-0 kubenswrapper[37036]: I0312 14:53:07.279162 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18749a86-eed1-4fa8-b31d-98f0a3fc67fb" path="/var/lib/kubelet/pods/18749a86-eed1-4fa8-b31d-98f0a3fc67fb/volumes"
Mar 12 14:53:07.279878 master-0 kubenswrapper[37036]: I0312 14:53:07.279470 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jpd\" (UniqueName: \"kubernetes.io/projected/29a0e0e6-b7b6-4632-94d9-86c24da56df4-kube-api-access-c9jpd\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.279997 master-0 kubenswrapper[37036]: I0312 14:53:07.279970 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.280236 master-0 kubenswrapper[37036]: I0312 14:53:07.280109 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.280236 master-0 kubenswrapper[37036]: I0312 14:53:07.280131 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844da800-0e8b-47b9-ac8f-4303f70f0cf3" path="/var/lib/kubelet/pods/844da800-0e8b-47b9-ac8f-4303f70f0cf3/volumes"
Mar 12 14:53:07.281175 master-0 kubenswrapper[37036]: I0312 14:53:07.281074 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26314aa-b970-4b92-8037-7485b8d5b20b" path="/var/lib/kubelet/pods/c26314aa-b970-4b92-8037-7485b8d5b20b/volumes"
Mar 12 14:53:07.384782 master-0 kubenswrapper[37036]: I0312 14:53:07.384709 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.386448 master-0 kubenswrapper[37036]: I0312 14:53:07.386356 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9jpd\" (UniqueName: \"kubernetes.io/projected/29a0e0e6-b7b6-4632-94d9-86c24da56df4-kube-api-access-c9jpd\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.386831 master-0 kubenswrapper[37036]: I0312 14:53:07.386446 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.389821 master-0 kubenswrapper[37036]: I0312 14:53:07.389759 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.390922 master-0 kubenswrapper[37036]: I0312 14:53:07.390404 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a0e0e6-b7b6-4632-94d9-86c24da56df4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.411007 master-0 kubenswrapper[37036]: I0312 14:53:07.409449 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9jpd\" (UniqueName: \"kubernetes.io/projected/29a0e0e6-b7b6-4632-94d9-86c24da56df4-kube-api-access-c9jpd\") pod \"nova-cell1-conductor-0\" (UID: \"29a0e0e6-b7b6-4632-94d9-86c24da56df4\") " pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:07.450319 master-0 kubenswrapper[37036]: I0312 14:53:07.450275 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:53:07.491922 master-0 kubenswrapper[37036]: I0312 14:53:07.491852 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:08.016923 master-0 kubenswrapper[37036]: I0312 14:53:08.014364 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b7dc25-a599-4d70-ab20-134de7116d36","Type":"ContainerStarted","Data":"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"}
Mar 12 14:53:08.016923 master-0 kubenswrapper[37036]: I0312 14:53:08.014441 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b7dc25-a599-4d70-ab20-134de7116d36","Type":"ContainerStarted","Data":"fa7b5d075465e552484fbd9672e9b9f7db591955e604b1a26b8695e5f7852190"}
Mar 12 14:53:08.023166 master-0 kubenswrapper[37036]: I0312 14:53:08.022880 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerStarted","Data":"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f"}
Mar 12 14:53:08.023166 master-0 kubenswrapper[37036]: I0312 14:53:08.023014 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerStarted","Data":"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578"}
Mar 12 14:53:08.023166 master-0 kubenswrapper[37036]: I0312 14:53:08.023030 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerStarted","Data":"c33aa0aba65a115e0273a1c26bfe208ac925a8317156362d7baa26f3062ba407"}
Mar 12 14:53:08.043712 master-0 kubenswrapper[37036]: I0312 14:53:08.043651 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerStarted","Data":"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"}
Mar 12 14:53:08.043712 master-0 kubenswrapper[37036]: I0312 14:53:08.043708 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerStarted","Data":"35dfbe220b75e7d1b3b211033be3823ed86ca037e9d9026570a3be1589fd597e"}
Mar 12 14:53:08.054015 master-0 kubenswrapper[37036]: I0312 14:53:08.053942 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 14:53:08.057175 master-0 kubenswrapper[37036]: I0312 14:53:08.057101 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.05484889 podStartE2EDuration="3.05484889s" podCreationTimestamp="2026-03-12 14:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:08.042251933 +0000 UTC m=+1047.049992890" watchObservedRunningTime="2026-03-12 14:53:08.05484889 +0000 UTC m=+1047.062589847"
Mar 12 14:53:08.084004 master-0 kubenswrapper[37036]: I0312 14:53:08.083887 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.083868928 podStartE2EDuration="2.083868928s" podCreationTimestamp="2026-03-12 14:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:08.068524573 +0000 UTC m=+1047.076265510" watchObservedRunningTime="2026-03-12 14:53:08.083868928 +0000 UTC m=+1047.091609865"
Mar 12 14:53:09.059355 master-0 kubenswrapper[37036]: I0312 14:53:09.059261 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"29a0e0e6-b7b6-4632-94d9-86c24da56df4","Type":"ContainerStarted","Data":"9ea1005c80bbda9070a4ebeb829aa3c9fd5444d8451dede94683c3e19383d695"}
Mar 12 14:53:09.059355 master-0 kubenswrapper[37036]: I0312 14:53:09.059339 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"29a0e0e6-b7b6-4632-94d9-86c24da56df4","Type":"ContainerStarted","Data":"6e06b0861bf5cffd05ef085256fb007d14fc546b40f10476c4e0df1d52f551c0"}
Mar 12 14:53:09.060053 master-0 kubenswrapper[37036]: I0312 14:53:09.059419 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:09.063671 master-0 kubenswrapper[37036]: I0312 14:53:09.063620 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerStarted","Data":"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"}
Mar 12 14:53:09.094444 master-0 kubenswrapper[37036]: I0312 14:53:09.094346 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.094323541 podStartE2EDuration="2.094323541s" podCreationTimestamp="2026-03-12 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:09.084995663 +0000 UTC m=+1048.092736600" watchObservedRunningTime="2026-03-12 14:53:09.094323541 +0000 UTC m=+1048.102064478"
Mar 12 14:53:09.125927 master-0 kubenswrapper[37036]: I0312 14:53:09.125817 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.1257949800000002 podStartE2EDuration="3.12579498s" podCreationTimestamp="2026-03-12 14:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:09.111448469 +0000 UTC m=+1048.119189426" watchObservedRunningTime="2026-03-12 14:53:09.12579498 +0000 UTC m=+1048.133535917"
Mar 12 14:53:11.464491 master-0 kubenswrapper[37036]: I0312 14:53:11.464397 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 12 14:53:11.775865 master-0 kubenswrapper[37036]: I0312 14:53:11.775810 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 14:53:11.777274 master-0 kubenswrapper[37036]: I0312 14:53:11.776113 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 14:53:16.464990 master-0 kubenswrapper[37036]: I0312 14:53:16.464922 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 14:53:16.494698 master-0 kubenswrapper[37036]: I0312 14:53:16.494627 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 14:53:16.641292 master-0 kubenswrapper[37036]: I0312 14:53:16.641209 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 14:53:16.641292 master-0 kubenswrapper[37036]: I0312 14:53:16.641273 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 14:53:16.777592 master-0 kubenswrapper[37036]: I0312 14:53:16.777509 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 12 14:53:16.777592 master-0 kubenswrapper[37036]: I0312 14:53:16.777578 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 12 14:53:17.191392 master-0 kubenswrapper[37036]: I0312 14:53:17.191255 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 14:53:17.549620 master-0 kubenswrapper[37036]: I0312 14:53:17.549533 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 12 14:53:17.724462 master-0 kubenswrapper[37036]: I0312 14:53:17.724166 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.15:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:53:17.724462 master-0 kubenswrapper[37036]: I0312 14:53:17.724137 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.15:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:53:17.812366 master-0 kubenswrapper[37036]: I0312 14:53:17.812145 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:53:17.812366 master-0 kubenswrapper[37036]: I0312 14:53:17.812144 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 14:53:22.002554 master-0 kubenswrapper[37036]: I0312 14:53:22.002493 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.177291 master-0 kubenswrapper[37036]: I0312 14:53:22.175699 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data\") pod \"4997802f-9a57-43da-8580-059b53e904c8\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") "
Mar 12 14:53:22.177291 master-0 kubenswrapper[37036]: I0312 14:53:22.175978 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhzg\" (UniqueName: \"kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg\") pod \"4997802f-9a57-43da-8580-059b53e904c8\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") "
Mar 12 14:53:22.177291 master-0 kubenswrapper[37036]: I0312 14:53:22.176106 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle\") pod \"4997802f-9a57-43da-8580-059b53e904c8\" (UID: \"4997802f-9a57-43da-8580-059b53e904c8\") "
Mar 12 14:53:22.181939 master-0 kubenswrapper[37036]: I0312 14:53:22.181848 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg" (OuterVolumeSpecName: "kube-api-access-vxhzg") pod "4997802f-9a57-43da-8580-059b53e904c8" (UID: "4997802f-9a57-43da-8580-059b53e904c8"). InnerVolumeSpecName "kube-api-access-vxhzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:53:22.210805 master-0 kubenswrapper[37036]: I0312 14:53:22.210643 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data" (OuterVolumeSpecName: "config-data") pod "4997802f-9a57-43da-8580-059b53e904c8" (UID: "4997802f-9a57-43da-8580-059b53e904c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:22.213204 master-0 kubenswrapper[37036]: I0312 14:53:22.212940 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4997802f-9a57-43da-8580-059b53e904c8" (UID: "4997802f-9a57-43da-8580-059b53e904c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:22.234019 master-0 kubenswrapper[37036]: I0312 14:53:22.233948 37036 generic.go:334] "Generic (PLEG): container finished" podID="4997802f-9a57-43da-8580-059b53e904c8" containerID="efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315" exitCode=137
Mar 12 14:53:22.234019 master-0 kubenswrapper[37036]: I0312 14:53:22.234010 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4997802f-9a57-43da-8580-059b53e904c8","Type":"ContainerDied","Data":"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"}
Mar 12 14:53:22.234019 master-0 kubenswrapper[37036]: I0312 14:53:22.234031 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4997802f-9a57-43da-8580-059b53e904c8","Type":"ContainerDied","Data":"2b06fda22261c17e67a14dc28f0b16ec656c0fd7ac6990dbfbf2646fe22ab260"}
Mar 12 14:53:22.234350 master-0 kubenswrapper[37036]: I0312 14:53:22.234050 37036 scope.go:117] "RemoveContainer" containerID="efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"
Mar 12 14:53:22.234350 master-0 kubenswrapper[37036]: I0312 14:53:22.234274 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.279675 master-0 kubenswrapper[37036]: I0312 14:53:22.279609 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhzg\" (UniqueName: \"kubernetes.io/projected/4997802f-9a57-43da-8580-059b53e904c8-kube-api-access-vxhzg\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:22.279675 master-0 kubenswrapper[37036]: I0312 14:53:22.279664 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:22.279675 master-0 kubenswrapper[37036]: I0312 14:53:22.279675 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4997802f-9a57-43da-8580-059b53e904c8-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:22.309614 master-0 kubenswrapper[37036]: I0312 14:53:22.309529 37036 scope.go:117] "RemoveContainer" containerID="efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"
Mar 12 14:53:22.310166 master-0 kubenswrapper[37036]: E0312 14:53:22.310092 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315\": container with ID starting with efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315 not found: ID does not exist" containerID="efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"
Mar 12 14:53:22.310285 master-0 kubenswrapper[37036]: I0312 14:53:22.310231 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315"} err="failed to get container status \"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315\": rpc error: code = NotFound desc = could not find container \"efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315\": container with ID starting with efa6d664d37b5d0b4ce207daa20da33480eabb55584c122de9da9a76aa822315 not found: ID does not exist"
Mar 12 14:53:22.315539 master-0 kubenswrapper[37036]: I0312 14:53:22.315464 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 14:53:22.330132 master-0 kubenswrapper[37036]: I0312 14:53:22.330041 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 14:53:22.389977 master-0 kubenswrapper[37036]: I0312 14:53:22.389904 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 14:53:22.390575 master-0 kubenswrapper[37036]: E0312 14:53:22.390540 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4997802f-9a57-43da-8580-059b53e904c8" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 14:53:22.390575 master-0 kubenswrapper[37036]: I0312 14:53:22.390568 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4997802f-9a57-43da-8580-059b53e904c8" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 14:53:22.391008 master-0 kubenswrapper[37036]: I0312 14:53:22.390978 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4997802f-9a57-43da-8580-059b53e904c8" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 14:53:22.391808 master-0 kubenswrapper[37036]: I0312 14:53:22.391775 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.394272 master-0 kubenswrapper[37036]: I0312 14:53:22.394227 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 12 14:53:22.394499 master-0 kubenswrapper[37036]: I0312 14:53:22.394467 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 12 14:53:22.394797 master-0 kubenswrapper[37036]: I0312 14:53:22.394761 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 12 14:53:22.414029 master-0 kubenswrapper[37036]: I0312 14:53:22.413238 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 14:53:22.486417 master-0 kubenswrapper[37036]: I0312 14:53:22.486235 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.486417 master-0 kubenswrapper[37036]: I0312 14:53:22.486363 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5fq\" (UniqueName: \"kubernetes.io/projected/aa4193cf-d004-497f-b8da-736467c10ced-kube-api-access-sz5fq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.486882 master-0 kubenswrapper[37036]: I0312 14:53:22.486531 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.486882 master-0 kubenswrapper[37036]: I0312 14:53:22.486569 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.486882 master-0 kubenswrapper[37036]: I0312 14:53:22.486593 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.588603 master-0 kubenswrapper[37036]: I0312 14:53:22.588524 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.589078 master-0 kubenswrapper[37036]: I0312 14:53:22.588857 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5fq\" (UniqueName: \"kubernetes.io/projected/aa4193cf-d004-497f-b8da-736467c10ced-kube-api-access-sz5fq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.589078 master-0 kubenswrapper[37036]: I0312 14:53:22.588982 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.589078 master-0 kubenswrapper[37036]: I0312 14:53:22.589025 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.589078 master-0 kubenswrapper[37036]: I0312 14:53:22.589057 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.596042 master-0 kubenswrapper[37036]: I0312 14:53:22.594119 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.597644 master-0 kubenswrapper[37036]: I0312 14:53:22.597548 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.597979 master-0 kubenswrapper[37036]: I0312 14:53:22.597781 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.599309 master-0 kubenswrapper[37036]: I0312 14:53:22.599261 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa4193cf-d004-497f-b8da-736467c10ced-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.608717 master-0 kubenswrapper[37036]: I0312 14:53:22.608652 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5fq\" (UniqueName: \"kubernetes.io/projected/aa4193cf-d004-497f-b8da-736467c10ced-kube-api-access-sz5fq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa4193cf-d004-497f-b8da-736467c10ced\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:22.728526 master-0 kubenswrapper[37036]: I0312 14:53:22.728450 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 14:53:23.257055 master-0 kubenswrapper[37036]: I0312 14:53:23.257007 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4997802f-9a57-43da-8580-059b53e904c8" path="/var/lib/kubelet/pods/4997802f-9a57-43da-8580-059b53e904c8/volumes"
Mar 12 14:53:23.261845 master-0 kubenswrapper[37036]: I0312 14:53:23.259490 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 14:53:24.270260 master-0 kubenswrapper[37036]: I0312 14:53:24.270187 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa4193cf-d004-497f-b8da-736467c10ced","Type":"ContainerStarted","Data":"b7e6e29d6a3985f427b588015dc42a8583481211ab9f67a3296158b6c11a5dd8"}
Mar 12 14:53:24.270260 master-0 kubenswrapper[37036]: I0312 14:53:24.270248 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa4193cf-d004-497f-b8da-736467c10ced","Type":"ContainerStarted","Data":"2129789db6573e594be6f0f6bcdd6fc672db36eeec9a2cab7f049b61322359fe"}
Mar 12 14:53:24.293591 master-0 kubenswrapper[37036]: I0312 14:53:24.293489 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.293470054 podStartE2EDuration="2.293470054s" podCreationTimestamp="2026-03-12 14:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:24.29086965 +0000 UTC m=+1063.298610587" watchObservedRunningTime="2026-03-12 14:53:24.293470054 +0000 UTC m=+1063.301211001"
Mar 12 14:53:26.644260 master-0 kubenswrapper[37036]: I0312 14:53:26.644171 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 14:53:26.644968 master-0 kubenswrapper[37036]: I0312 14:53:26.644344 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 14:53:26.644968 master-0 kubenswrapper[37036]: I0312 14:53:26.644885 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 14:53:26.644968 master-0 kubenswrapper[37036]: I0312 14:53:26.644944 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 14:53:26.648047 master-0 kubenswrapper[37036]: I0312 14:53:26.648024 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 14:53:26.648633 master-0 kubenswrapper[37036]: I0312 14:53:26.648614 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 14:53:26.815141 master-0 kubenswrapper[37036]: I0312 14:53:26.813227 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 14:53:26.835813 master-0 kubenswrapper[37036]: I0312 14:53:26.835768 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 14:53:26.836162 master-0 kubenswrapper[37036]: I0312 14:53:26.835978 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 12 14:53:26.985733 master-0 kubenswrapper[37036]: I0312 14:53:26.982121 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-676dcc7665-z72s6"]
Mar 12 14:53:27.043989 master-0 kubenswrapper[37036]: I0312 14:53:27.043870 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676dcc7665-z72s6"
Mar 12 14:53:27.069169 master-0 kubenswrapper[37036]: I0312 14:53:27.069100 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676dcc7665-z72s6"]
Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312 14:53:27.148250 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-nb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6"
Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312 14:53:27.148346 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-config\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6"
Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312 14:53:27.148804 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-sb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6"
Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312 14:53:27.149008 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbqn\" (UniqueName: \"kubernetes.io/projected/30518ff6-a619-4340-9982-2662a8475370-kube-api-access-mbbqn\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6"
Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312
14:53:27.149163 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-svc\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.161926 master-0 kubenswrapper[37036]: I0312 14:53:27.149246 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-swift-storage-0\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.259438 master-0 kubenswrapper[37036]: I0312 14:53:27.259360 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-svc\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.259712 master-0 kubenswrapper[37036]: I0312 14:53:27.259479 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-swift-storage-0\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.259712 master-0 kubenswrapper[37036]: I0312 14:53:27.259549 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-nb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 
14:53:27.259712 master-0 kubenswrapper[37036]: I0312 14:53:27.259571 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-config\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.259893 master-0 kubenswrapper[37036]: I0312 14:53:27.259759 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-sb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.259893 master-0 kubenswrapper[37036]: I0312 14:53:27.259804 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbqn\" (UniqueName: \"kubernetes.io/projected/30518ff6-a619-4340-9982-2662a8475370-kube-api-access-mbbqn\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.261275 master-0 kubenswrapper[37036]: I0312 14:53:27.261140 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-svc\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.270922 master-0 kubenswrapper[37036]: I0312 14:53:27.266867 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-nb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.270922 
master-0 kubenswrapper[37036]: I0312 14:53:27.267814 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-config\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.270922 master-0 kubenswrapper[37036]: I0312 14:53:27.267938 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-dns-swift-storage-0\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.270922 master-0 kubenswrapper[37036]: I0312 14:53:27.268400 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30518ff6-a619-4340-9982-2662a8475370-ovsdbserver-sb\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.304919 master-0 kubenswrapper[37036]: I0312 14:53:27.301198 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbqn\" (UniqueName: \"kubernetes.io/projected/30518ff6-a619-4340-9982-2662a8475370-kube-api-access-mbbqn\") pod \"dnsmasq-dns-676dcc7665-z72s6\" (UID: \"30518ff6-a619-4340-9982-2662a8475370\") " pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.421926 master-0 kubenswrapper[37036]: I0312 14:53:27.417254 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:27.471437 master-0 kubenswrapper[37036]: I0312 14:53:27.470920 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 14:53:27.731881 master-0 kubenswrapper[37036]: I0312 14:53:27.729310 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:53:27.979245 master-0 kubenswrapper[37036]: I0312 14:53:27.979203 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676dcc7665-z72s6"] Mar 12 14:53:28.345725 master-0 kubenswrapper[37036]: I0312 14:53:28.345583 37036 generic.go:334] "Generic (PLEG): container finished" podID="30518ff6-a619-4340-9982-2662a8475370" containerID="e4451680c2065f662ac428d16b833e8c4191bbd8bc9cb0a0220e6ef58013e90c" exitCode=0 Mar 12 14:53:28.346070 master-0 kubenswrapper[37036]: I0312 14:53:28.346015 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" event={"ID":"30518ff6-a619-4340-9982-2662a8475370","Type":"ContainerDied","Data":"e4451680c2065f662ac428d16b833e8c4191bbd8bc9cb0a0220e6ef58013e90c"} Mar 12 14:53:28.346159 master-0 kubenswrapper[37036]: I0312 14:53:28.346091 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" event={"ID":"30518ff6-a619-4340-9982-2662a8475370","Type":"ContainerStarted","Data":"b051dc40a79bd4fefd77ef8897ad047106e14e269194d4c43ab494dd19fabdfe"} Mar 12 14:53:29.358869 master-0 kubenswrapper[37036]: I0312 14:53:29.358810 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" event={"ID":"30518ff6-a619-4340-9982-2662a8475370","Type":"ContainerStarted","Data":"3a3129013cdf0a808bc906c10e6ab498eca8ff46127264879a7617b6f5a7fa55"} Mar 12 14:53:29.384956 master-0 kubenswrapper[37036]: I0312 14:53:29.384853 37036 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" podStartSLOduration=3.3848316609999998 podStartE2EDuration="3.384831661s" podCreationTimestamp="2026-03-12 14:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:29.384452442 +0000 UTC m=+1068.392193389" watchObservedRunningTime="2026-03-12 14:53:29.384831661 +0000 UTC m=+1068.392572598" Mar 12 14:53:29.777258 master-0 kubenswrapper[37036]: I0312 14:53:29.777167 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:29.777535 master-0 kubenswrapper[37036]: I0312 14:53:29.777401 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-log" containerID="cri-o://9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578" gracePeriod=30 Mar 12 14:53:29.777535 master-0 kubenswrapper[37036]: I0312 14:53:29.777452 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-api" containerID="cri-o://108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f" gracePeriod=30 Mar 12 14:53:30.377548 master-0 kubenswrapper[37036]: I0312 14:53:30.377487 37036 generic.go:334] "Generic (PLEG): container finished" podID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerID="9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578" exitCode=143 Mar 12 14:53:30.378581 master-0 kubenswrapper[37036]: I0312 14:53:30.378535 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerDied","Data":"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578"} Mar 12 14:53:30.378743 master-0 kubenswrapper[37036]: I0312 14:53:30.378644 37036 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:32.728831 master-0 kubenswrapper[37036]: I0312 14:53:32.728770 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:53:32.772283 master-0 kubenswrapper[37036]: I0312 14:53:32.772239 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:53:33.699845 master-0 kubenswrapper[37036]: I0312 14:53:33.699732 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:33.701950 master-0 kubenswrapper[37036]: I0312 14:53:33.700608 37036 generic.go:334] "Generic (PLEG): container finished" podID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerID="108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f" exitCode=0 Mar 12 14:53:33.701950 master-0 kubenswrapper[37036]: I0312 14:53:33.701154 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerDied","Data":"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f"} Mar 12 14:53:33.701950 master-0 kubenswrapper[37036]: I0312 14:53:33.701228 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f3abc9-a919-4f6b-8031-35d8546ad90e","Type":"ContainerDied","Data":"c33aa0aba65a115e0273a1c26bfe208ac925a8317156362d7baa26f3062ba407"} Mar 12 14:53:33.701950 master-0 kubenswrapper[37036]: I0312 14:53:33.701250 37036 scope.go:117] "RemoveContainer" containerID="108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f" Mar 12 14:53:33.747637 master-0 kubenswrapper[37036]: I0312 14:53:33.747602 37036 scope.go:117] "RemoveContainer" containerID="9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578" Mar 12 14:53:33.749582 master-0 
kubenswrapper[37036]: I0312 14:53:33.749549 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: I0312 14:53:33.810638 37036 scope.go:117] "RemoveContainer" containerID="108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: E0312 14:53:33.815191 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f\": container with ID starting with 108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f not found: ID does not exist" containerID="108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: I0312 14:53:33.815257 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f"} err="failed to get container status \"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f\": rpc error: code = NotFound desc = could not find container \"108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f\": container with ID starting with 108e1dcd08a14749037efb543d6b5a5548a76604aa485c419257dca0c840ea8f not found: ID does not exist" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: I0312 14:53:33.815294 37036 scope.go:117] "RemoveContainer" containerID="9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: E0312 14:53:33.817823 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578\": container with ID starting with 
9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578 not found: ID does not exist" containerID="9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578" Mar 12 14:53:33.819026 master-0 kubenswrapper[37036]: I0312 14:53:33.817859 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578"} err="failed to get container status \"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578\": rpc error: code = NotFound desc = could not find container \"9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578\": container with ID starting with 9666af9c947cfa072bcbba4525ba508081eb1c462ea53a68391910ae07fba578 not found: ID does not exist" Mar 12 14:53:33.897934 master-0 kubenswrapper[37036]: I0312 14:53:33.897437 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvfdv\" (UniqueName: \"kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv\") pod \"59f3abc9-a919-4f6b-8031-35d8546ad90e\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " Mar 12 14:53:33.898354 master-0 kubenswrapper[37036]: I0312 14:53:33.898335 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data\") pod \"59f3abc9-a919-4f6b-8031-35d8546ad90e\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " Mar 12 14:53:33.898498 master-0 kubenswrapper[37036]: I0312 14:53:33.898485 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs\") pod \"59f3abc9-a919-4f6b-8031-35d8546ad90e\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " Mar 12 14:53:33.899009 master-0 kubenswrapper[37036]: I0312 14:53:33.898993 37036 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle\") pod \"59f3abc9-a919-4f6b-8031-35d8546ad90e\" (UID: \"59f3abc9-a919-4f6b-8031-35d8546ad90e\") " Mar 12 14:53:33.899283 master-0 kubenswrapper[37036]: I0312 14:53:33.898875 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs" (OuterVolumeSpecName: "logs") pod "59f3abc9-a919-4f6b-8031-35d8546ad90e" (UID: "59f3abc9-a919-4f6b-8031-35d8546ad90e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:33.904827 master-0 kubenswrapper[37036]: I0312 14:53:33.904774 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv" (OuterVolumeSpecName: "kube-api-access-hvfdv") pod "59f3abc9-a919-4f6b-8031-35d8546ad90e" (UID: "59f3abc9-a919-4f6b-8031-35d8546ad90e"). InnerVolumeSpecName "kube-api-access-hvfdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:33.905862 master-0 kubenswrapper[37036]: I0312 14:53:33.905473 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvfdv\" (UniqueName: \"kubernetes.io/projected/59f3abc9-a919-4f6b-8031-35d8546ad90e-kube-api-access-hvfdv\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:33.905862 master-0 kubenswrapper[37036]: I0312 14:53:33.905510 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f3abc9-a919-4f6b-8031-35d8546ad90e-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:33.970505 master-0 kubenswrapper[37036]: I0312 14:53:33.970202 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-26qtm"] Mar 12 14:53:33.971503 master-0 kubenswrapper[37036]: E0312 14:53:33.971337 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-log" Mar 12 14:53:33.971503 master-0 kubenswrapper[37036]: I0312 14:53:33.971377 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-log" Mar 12 14:53:33.971503 master-0 kubenswrapper[37036]: E0312 14:53:33.971428 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-api" Mar 12 14:53:33.971503 master-0 kubenswrapper[37036]: I0312 14:53:33.971442 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-api" Mar 12 14:53:33.975747 master-0 kubenswrapper[37036]: I0312 14:53:33.974499 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data" (OuterVolumeSpecName: "config-data") pod "59f3abc9-a919-4f6b-8031-35d8546ad90e" (UID: "59f3abc9-a919-4f6b-8031-35d8546ad90e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:33.984320 master-0 kubenswrapper[37036]: I0312 14:53:33.984066 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-api" Mar 12 14:53:33.984320 master-0 kubenswrapper[37036]: I0312 14:53:33.984201 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" containerName="nova-api-log" Mar 12 14:53:33.988321 master-0 kubenswrapper[37036]: I0312 14:53:33.987082 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:33.997316 master-0 kubenswrapper[37036]: I0312 14:53:33.997264 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-q6h7h"] Mar 12 14:53:34.000255 master-0 kubenswrapper[37036]: I0312 14:53:33.999451 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.002910 master-0 kubenswrapper[37036]: I0312 14:53:34.002832 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 14:53:34.003420 master-0 kubenswrapper[37036]: I0312 14:53:34.003404 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 14:53:34.007236 master-0 kubenswrapper[37036]: I0312 14:53:34.007156 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-26qtm"] Mar 12 14:53:34.012118 master-0 kubenswrapper[37036]: I0312 14:53:34.012051 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mkb\" (UniqueName: \"kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.012310 master-0 kubenswrapper[37036]: I0312 14:53:34.012133 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq9q\" (UniqueName: \"kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.012310 master-0 kubenswrapper[37036]: I0312 14:53:34.012192 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.012423 master-0 kubenswrapper[37036]: I0312 14:53:34.012331 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.012423 master-0 kubenswrapper[37036]: I0312 14:53:34.012375 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.012489 master-0 kubenswrapper[37036]: I0312 14:53:34.012466 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.012525 master-0 kubenswrapper[37036]: I0312 14:53:34.012501 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.012561 master-0 kubenswrapper[37036]: I0312 14:53:34.012531 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " 
pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.015079 master-0 kubenswrapper[37036]: I0312 14:53:34.014588 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59f3abc9-a919-4f6b-8031-35d8546ad90e" (UID: "59f3abc9-a919-4f6b-8031-35d8546ad90e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:34.017460 master-0 kubenswrapper[37036]: I0312 14:53:34.016466 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:34.032836 master-0 kubenswrapper[37036]: I0312 14:53:34.032768 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-q6h7h"] Mar 12 14:53:34.118357 master-0 kubenswrapper[37036]: I0312 14:53:34.118308 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.118457 master-0 kubenswrapper[37036]: I0312 14:53:34.118371 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.118457 master-0 kubenswrapper[37036]: I0312 14:53:34.118408 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.118457 master-0 kubenswrapper[37036]: I0312 14:53:34.118426 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.118457 master-0 kubenswrapper[37036]: I0312 14:53:34.118444 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.118601 master-0 kubenswrapper[37036]: I0312 14:53:34.118577 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mkb\" (UniqueName: \"kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.118643 master-0 kubenswrapper[37036]: I0312 14:53:34.118606 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlq9q\" (UniqueName: \"kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.118643 master-0 kubenswrapper[37036]: I0312 14:53:34.118630 37036 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.120471 master-0 kubenswrapper[37036]: I0312 14:53:34.120445 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f3abc9-a919-4f6b-8031-35d8546ad90e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:34.124483 master-0 kubenswrapper[37036]: I0312 14:53:34.124181 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.125604 master-0 kubenswrapper[37036]: I0312 14:53:34.125543 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.131119 master-0 kubenswrapper[37036]: I0312 14:53:34.131056 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.132283 master-0 kubenswrapper[37036]: I0312 14:53:34.132250 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle\") pod 
\"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.133522 master-0 kubenswrapper[37036]: I0312 14:53:34.133463 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.134524 master-0 kubenswrapper[37036]: I0312 14:53:34.134480 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.144554 master-0 kubenswrapper[37036]: I0312 14:53:34.144396 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mkb\" (UniqueName: \"kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb\") pod \"nova-cell1-host-discover-q6h7h\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.144848 master-0 kubenswrapper[37036]: I0312 14:53:34.144822 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlq9q\" (UniqueName: \"kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q\") pod \"nova-cell1-cell-mapping-26qtm\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.401083 master-0 kubenswrapper[37036]: I0312 14:53:34.400994 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:34.425365 master-0 kubenswrapper[37036]: I0312 14:53:34.425259 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:34.729167 master-0 kubenswrapper[37036]: I0312 14:53:34.729106 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:34.817945 master-0 kubenswrapper[37036]: I0312 14:53:34.815679 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:34.848961 master-0 kubenswrapper[37036]: I0312 14:53:34.833808 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:34.848961 master-0 kubenswrapper[37036]: I0312 14:53:34.847817 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:34.865121 master-0 kubenswrapper[37036]: I0312 14:53:34.850758 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:34.865121 master-0 kubenswrapper[37036]: I0312 14:53:34.853389 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 14:53:34.865121 master-0 kubenswrapper[37036]: I0312 14:53:34.853606 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 14:53:34.865121 master-0 kubenswrapper[37036]: I0312 14:53:34.854506 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 14:53:34.865121 master-0 kubenswrapper[37036]: I0312 14:53:34.863140 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:34.955640 master-0 kubenswrapper[37036]: I0312 14:53:34.955499 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4r8g\" (UniqueName: \"kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.955640 master-0 kubenswrapper[37036]: I0312 14:53:34.955590 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.955640 master-0 kubenswrapper[37036]: I0312 14:53:34.955623 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.955968 master-0 kubenswrapper[37036]: I0312 14:53:34.955738 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.955968 master-0 kubenswrapper[37036]: I0312 14:53:34.955813 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.955968 master-0 kubenswrapper[37036]: I0312 14:53:34.955859 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:34.962354 master-0 kubenswrapper[37036]: I0312 14:53:34.962298 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-26qtm"] Mar 12 14:53:35.069399 master-0 kubenswrapper[37036]: I0312 14:53:35.069329 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4r8g\" (UniqueName: \"kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.069620 master-0 kubenswrapper[37036]: I0312 14:53:35.069587 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 
12 14:53:35.069669 master-0 kubenswrapper[37036]: I0312 14:53:35.069650 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.069986 master-0 kubenswrapper[37036]: I0312 14:53:35.069955 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.070082 master-0 kubenswrapper[37036]: I0312 14:53:35.070062 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.070142 master-0 kubenswrapper[37036]: I0312 14:53:35.070112 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.070739 master-0 kubenswrapper[37036]: I0312 14:53:35.070711 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.073907 master-0 kubenswrapper[37036]: I0312 14:53:35.073859 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.074511 master-0 kubenswrapper[37036]: I0312 14:53:35.074469 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.084043 master-0 kubenswrapper[37036]: I0312 14:53:35.083955 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.086703 master-0 kubenswrapper[37036]: I0312 14:53:35.086512 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.095403 master-0 kubenswrapper[37036]: I0312 14:53:35.093223 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-q6h7h"] Mar 12 14:53:35.095403 master-0 kubenswrapper[37036]: I0312 14:53:35.093679 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4r8g\" (UniqueName: \"kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g\") pod \"nova-api-0\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " pod="openstack/nova-api-0" Mar 12 14:53:35.187349 master-0 kubenswrapper[37036]: I0312 14:53:35.187077 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:35.251267 master-0 kubenswrapper[37036]: I0312 14:53:35.251215 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f3abc9-a919-4f6b-8031-35d8546ad90e" path="/var/lib/kubelet/pods/59f3abc9-a919-4f6b-8031-35d8546ad90e/volumes" Mar 12 14:53:35.707378 master-0 kubenswrapper[37036]: I0312 14:53:35.707316 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:35.712003 master-0 kubenswrapper[37036]: W0312 14:53:35.711956 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eeaa131_5527_4cd6_8e3b_e3dd359bcf3a.slice/crio-7e48aa3990bbb825bee0f56f77ffc3c5a7346b4abfe9100e7464c5795dcc16f1 WatchSource:0}: Error finding container 7e48aa3990bbb825bee0f56f77ffc3c5a7346b4abfe9100e7464c5795dcc16f1: Status 404 returned error can't find the container with id 7e48aa3990bbb825bee0f56f77ffc3c5a7346b4abfe9100e7464c5795dcc16f1 Mar 12 14:53:35.751705 master-0 kubenswrapper[37036]: I0312 14:53:35.751634 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-26qtm" event={"ID":"40de96d4-7f02-4bc3-a660-957d6b986159","Type":"ContainerStarted","Data":"01a04603021a310ded4c2ac4245e98ec78b86ae594963f1ba71df7aa693c427f"} Mar 12 14:53:35.751705 master-0 kubenswrapper[37036]: I0312 14:53:35.751701 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-26qtm" event={"ID":"40de96d4-7f02-4bc3-a660-957d6b986159","Type":"ContainerStarted","Data":"85d1363179369b0b7a7c0a4da770725e47d50443efaaa1b8b623fe996ab3f384"} Mar 12 14:53:35.753994 master-0 kubenswrapper[37036]: I0312 14:53:35.753946 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerStarted","Data":"7e48aa3990bbb825bee0f56f77ffc3c5a7346b4abfe9100e7464c5795dcc16f1"} Mar 12 14:53:35.757435 master-0 kubenswrapper[37036]: I0312 14:53:35.757378 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q6h7h" event={"ID":"ccc5c739-be50-4e7f-a490-f901f062e630","Type":"ContainerStarted","Data":"991c54771055af5fed8547ec8ca8b062907d328b0d761152f4f0d859a1d7ff8e"} Mar 12 14:53:35.757517 master-0 kubenswrapper[37036]: I0312 14:53:35.757440 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q6h7h" event={"ID":"ccc5c739-be50-4e7f-a490-f901f062e630","Type":"ContainerStarted","Data":"1789b4788da038fc92790193b3964242beb888d8d5466c05ca57fc2e24f3ca4e"} Mar 12 14:53:35.789910 master-0 kubenswrapper[37036]: I0312 14:53:35.789797 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-26qtm" podStartSLOduration=2.7897751299999998 podStartE2EDuration="2.78977513s" podCreationTimestamp="2026-03-12 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:35.774641151 +0000 UTC m=+1074.782382098" watchObservedRunningTime="2026-03-12 14:53:35.78977513 +0000 UTC m=+1074.797516067" Mar 12 14:53:35.827279 master-0 kubenswrapper[37036]: I0312 14:53:35.827199 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-q6h7h" podStartSLOduration=2.827175404 podStartE2EDuration="2.827175404s" podCreationTimestamp="2026-03-12 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:35.801227581 +0000 UTC m=+1074.808968518" watchObservedRunningTime="2026-03-12 14:53:35.827175404 +0000 UTC m=+1074.834916341" Mar 12 
14:53:36.825732 master-0 kubenswrapper[37036]: I0312 14:53:36.825611 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerStarted","Data":"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d"} Mar 12 14:53:36.825732 master-0 kubenswrapper[37036]: I0312 14:53:36.825669 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerStarted","Data":"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd"} Mar 12 14:53:36.862429 master-0 kubenswrapper[37036]: I0312 14:53:36.857194 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.857171874 podStartE2EDuration="2.857171874s" podCreationTimestamp="2026-03-12 14:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:36.852489099 +0000 UTC m=+1075.860230056" watchObservedRunningTime="2026-03-12 14:53:36.857171874 +0000 UTC m=+1075.864912811" Mar 12 14:53:37.419380 master-0 kubenswrapper[37036]: I0312 14:53:37.419228 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-676dcc7665-z72s6" Mar 12 14:53:37.566920 master-0 kubenswrapper[37036]: I0312 14:53:37.560136 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:53:37.566920 master-0 kubenswrapper[37036]: I0312 14:53:37.560532 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="dnsmasq-dns" containerID="cri-o://e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37" gracePeriod=10 Mar 12 14:53:37.782560 master-0 kubenswrapper[37036]: E0312 
14:53:37.782497 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f146506_a967_4da7_b1cb_57ba34e55eae.slice/crio-conmon-e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f146506_a967_4da7_b1cb_57ba34e55eae.slice/crio-e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:53:37.783240 master-0 kubenswrapper[37036]: E0312 14:53:37.783194 37036 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f146506_a967_4da7_b1cb_57ba34e55eae.slice/crio-e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f146506_a967_4da7_b1cb_57ba34e55eae.slice/crio-conmon-e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:53:37.844207 master-0 kubenswrapper[37036]: I0312 14:53:37.844131 37036 generic.go:334] "Generic (PLEG): container finished" podID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerID="e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37" exitCode=0 Mar 12 14:53:37.844530 master-0 kubenswrapper[37036]: I0312 14:53:37.844202 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" event={"ID":"4f146506-a967-4da7-b1cb-57ba34e55eae","Type":"ContainerDied","Data":"e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37"} Mar 12 14:53:38.321466 master-0 kubenswrapper[37036]: I0312 14:53:38.321360 37036 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:53:38.391492 master-0 kubenswrapper[37036]: I0312 14:53:38.391425 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.391810 master-0 kubenswrapper[37036]: I0312 14:53:38.391532 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.391810 master-0 kubenswrapper[37036]: I0312 14:53:38.391565 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.391810 master-0 kubenswrapper[37036]: I0312 14:53:38.391601 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vnlc\" (UniqueName: \"kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.391810 master-0 kubenswrapper[37036]: I0312 14:53:38.391629 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.391986 master-0 kubenswrapper[37036]: I0312 
14:53:38.391834 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb\") pod \"4f146506-a967-4da7-b1cb-57ba34e55eae\" (UID: \"4f146506-a967-4da7-b1cb-57ba34e55eae\") " Mar 12 14:53:38.412996 master-0 kubenswrapper[37036]: I0312 14:53:38.412287 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc" (OuterVolumeSpecName: "kube-api-access-9vnlc") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "kube-api-access-9vnlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:38.449497 master-0 kubenswrapper[37036]: I0312 14:53:38.449428 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:38.452486 master-0 kubenswrapper[37036]: I0312 14:53:38.452325 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:38.461169 master-0 kubenswrapper[37036]: I0312 14:53:38.461105 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config" (OuterVolumeSpecName: "config") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:38.462769 master-0 kubenswrapper[37036]: I0312 14:53:38.462743 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:38.471363 master-0 kubenswrapper[37036]: I0312 14:53:38.471294 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f146506-a967-4da7-b1cb-57ba34e55eae" (UID: "4f146506-a967-4da7-b1cb-57ba34e55eae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508440 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508519 37036 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508541 37036 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508555 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vnlc\" (UniqueName: \"kubernetes.io/projected/4f146506-a967-4da7-b1cb-57ba34e55eae-kube-api-access-9vnlc\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508569 37036 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.508914 master-0 kubenswrapper[37036]: I0312 14:53:38.508601 37036 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f146506-a967-4da7-b1cb-57ba34e55eae-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:38.858855 master-0 kubenswrapper[37036]: I0312 14:53:38.858176 37036 generic.go:334] "Generic (PLEG): container finished" podID="ccc5c739-be50-4e7f-a490-f901f062e630" 
containerID="991c54771055af5fed8547ec8ca8b062907d328b0d761152f4f0d859a1d7ff8e" exitCode=0 Mar 12 14:53:38.858855 master-0 kubenswrapper[37036]: I0312 14:53:38.858229 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q6h7h" event={"ID":"ccc5c739-be50-4e7f-a490-f901f062e630","Type":"ContainerDied","Data":"991c54771055af5fed8547ec8ca8b062907d328b0d761152f4f0d859a1d7ff8e"} Mar 12 14:53:38.861022 master-0 kubenswrapper[37036]: I0312 14:53:38.860946 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" event={"ID":"4f146506-a967-4da7-b1cb-57ba34e55eae","Type":"ContainerDied","Data":"33854a1087f99e6b93e6f0643e6cb739c82e01b35dc7f499b1d2715958a6f975"} Mar 12 14:53:38.861022 master-0 kubenswrapper[37036]: I0312 14:53:38.860993 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b98899985-qcjxc" Mar 12 14:53:38.861195 master-0 kubenswrapper[37036]: I0312 14:53:38.861001 37036 scope.go:117] "RemoveContainer" containerID="e9cb804899b4791907de75c5787d8348bfcdfab33c441329276c18bc32f87f37" Mar 12 14:53:38.890425 master-0 kubenswrapper[37036]: I0312 14:53:38.890330 37036 scope.go:117] "RemoveContainer" containerID="320f295bd7f1413cc99a1a20a2c53b20e3bc1f764231c07886d04c601cb4521f" Mar 12 14:53:38.913982 master-0 kubenswrapper[37036]: I0312 14:53:38.912574 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:53:38.929011 master-0 kubenswrapper[37036]: I0312 14:53:38.927034 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b98899985-qcjxc"] Mar 12 14:53:39.252927 master-0 kubenswrapper[37036]: I0312 14:53:39.250801 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" path="/var/lib/kubelet/pods/4f146506-a967-4da7-b1cb-57ba34e55eae/volumes" Mar 12 14:53:40.328567 master-0 
kubenswrapper[37036]: I0312 14:53:40.328515 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:40.487808 master-0 kubenswrapper[37036]: I0312 14:53:40.486862 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data\") pod \"ccc5c739-be50-4e7f-a490-f901f062e630\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " Mar 12 14:53:40.487808 master-0 kubenswrapper[37036]: I0312 14:53:40.487091 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5mkb\" (UniqueName: \"kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb\") pod \"ccc5c739-be50-4e7f-a490-f901f062e630\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " Mar 12 14:53:40.487808 master-0 kubenswrapper[37036]: I0312 14:53:40.487236 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts\") pod \"ccc5c739-be50-4e7f-a490-f901f062e630\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " Mar 12 14:53:40.487808 master-0 kubenswrapper[37036]: I0312 14:53:40.487406 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle\") pod \"ccc5c739-be50-4e7f-a490-f901f062e630\" (UID: \"ccc5c739-be50-4e7f-a490-f901f062e630\") " Mar 12 14:53:40.504014 master-0 kubenswrapper[37036]: I0312 14:53:40.503420 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb" (OuterVolumeSpecName: "kube-api-access-d5mkb") pod "ccc5c739-be50-4e7f-a490-f901f062e630" (UID: 
"ccc5c739-be50-4e7f-a490-f901f062e630"). InnerVolumeSpecName "kube-api-access-d5mkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:40.504014 master-0 kubenswrapper[37036]: I0312 14:53:40.503517 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts" (OuterVolumeSpecName: "scripts") pod "ccc5c739-be50-4e7f-a490-f901f062e630" (UID: "ccc5c739-be50-4e7f-a490-f901f062e630"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:40.523058 master-0 kubenswrapper[37036]: I0312 14:53:40.521068 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data" (OuterVolumeSpecName: "config-data") pod "ccc5c739-be50-4e7f-a490-f901f062e630" (UID: "ccc5c739-be50-4e7f-a490-f901f062e630"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:40.523327 master-0 kubenswrapper[37036]: I0312 14:53:40.523061 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccc5c739-be50-4e7f-a490-f901f062e630" (UID: "ccc5c739-be50-4e7f-a490-f901f062e630"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:40.592762 master-0 kubenswrapper[37036]: I0312 14:53:40.592678 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:40.592762 master-0 kubenswrapper[37036]: I0312 14:53:40.592738 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:40.592762 master-0 kubenswrapper[37036]: I0312 14:53:40.592755 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5mkb\" (UniqueName: \"kubernetes.io/projected/ccc5c739-be50-4e7f-a490-f901f062e630-kube-api-access-d5mkb\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:40.592762 master-0 kubenswrapper[37036]: I0312 14:53:40.592770 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccc5c739-be50-4e7f-a490-f901f062e630-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:40.908949 master-0 kubenswrapper[37036]: I0312 14:53:40.908860 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-q6h7h" Mar 12 14:53:40.915288 master-0 kubenswrapper[37036]: I0312 14:53:40.908775 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-q6h7h" event={"ID":"ccc5c739-be50-4e7f-a490-f901f062e630","Type":"ContainerDied","Data":"1789b4788da038fc92790193b3964242beb888d8d5466c05ca57fc2e24f3ca4e"} Mar 12 14:53:40.915524 master-0 kubenswrapper[37036]: I0312 14:53:40.915402 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1789b4788da038fc92790193b3964242beb888d8d5466c05ca57fc2e24f3ca4e" Mar 12 14:53:41.921159 master-0 kubenswrapper[37036]: I0312 14:53:41.921098 37036 generic.go:334] "Generic (PLEG): container finished" podID="40de96d4-7f02-4bc3-a660-957d6b986159" containerID="01a04603021a310ded4c2ac4245e98ec78b86ae594963f1ba71df7aa693c427f" exitCode=0 Mar 12 14:53:41.921721 master-0 kubenswrapper[37036]: I0312 14:53:41.921167 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-26qtm" event={"ID":"40de96d4-7f02-4bc3-a660-957d6b986159","Type":"ContainerDied","Data":"01a04603021a310ded4c2ac4245e98ec78b86ae594963f1ba71df7aa693c427f"} Mar 12 14:53:43.480816 master-0 kubenswrapper[37036]: I0312 14:53:43.480770 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:43.575121 master-0 kubenswrapper[37036]: I0312 14:53:43.575068 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts\") pod \"40de96d4-7f02-4bc3-a660-957d6b986159\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " Mar 12 14:53:43.575617 master-0 kubenswrapper[37036]: I0312 14:53:43.575585 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data\") pod \"40de96d4-7f02-4bc3-a660-957d6b986159\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " Mar 12 14:53:43.576023 master-0 kubenswrapper[37036]: I0312 14:53:43.575955 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlq9q\" (UniqueName: \"kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q\") pod \"40de96d4-7f02-4bc3-a660-957d6b986159\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " Mar 12 14:53:43.576149 master-0 kubenswrapper[37036]: I0312 14:53:43.576127 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle\") pod \"40de96d4-7f02-4bc3-a660-957d6b986159\" (UID: \"40de96d4-7f02-4bc3-a660-957d6b986159\") " Mar 12 14:53:43.590006 master-0 kubenswrapper[37036]: I0312 14:53:43.589925 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts" (OuterVolumeSpecName: "scripts") pod "40de96d4-7f02-4bc3-a660-957d6b986159" (UID: "40de96d4-7f02-4bc3-a660-957d6b986159"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:43.590197 master-0 kubenswrapper[37036]: I0312 14:53:43.590001 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q" (OuterVolumeSpecName: "kube-api-access-wlq9q") pod "40de96d4-7f02-4bc3-a660-957d6b986159" (UID: "40de96d4-7f02-4bc3-a660-957d6b986159"). InnerVolumeSpecName "kube-api-access-wlq9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:43.605074 master-0 kubenswrapper[37036]: I0312 14:53:43.605024 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data" (OuterVolumeSpecName: "config-data") pod "40de96d4-7f02-4bc3-a660-957d6b986159" (UID: "40de96d4-7f02-4bc3-a660-957d6b986159"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:43.606356 master-0 kubenswrapper[37036]: I0312 14:53:43.606309 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40de96d4-7f02-4bc3-a660-957d6b986159" (UID: "40de96d4-7f02-4bc3-a660-957d6b986159"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:43.680341 master-0 kubenswrapper[37036]: I0312 14:53:43.680286 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:43.680548 master-0 kubenswrapper[37036]: I0312 14:53:43.680376 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlq9q\" (UniqueName: \"kubernetes.io/projected/40de96d4-7f02-4bc3-a660-957d6b986159-kube-api-access-wlq9q\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:43.680548 master-0 kubenswrapper[37036]: I0312 14:53:43.680396 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:43.680548 master-0 kubenswrapper[37036]: I0312 14:53:43.680406 37036 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40de96d4-7f02-4bc3-a660-957d6b986159-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:43.944941 master-0 kubenswrapper[37036]: I0312 14:53:43.944783 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-26qtm" event={"ID":"40de96d4-7f02-4bc3-a660-957d6b986159","Type":"ContainerDied","Data":"85d1363179369b0b7a7c0a4da770725e47d50443efaaa1b8b623fe996ab3f384"} Mar 12 14:53:43.944941 master-0 kubenswrapper[37036]: I0312 14:53:43.944830 37036 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85d1363179369b0b7a7c0a4da770725e47d50443efaaa1b8b623fe996ab3f384" Mar 12 14:53:43.944941 master-0 kubenswrapper[37036]: I0312 14:53:43.944885 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-26qtm" Mar 12 14:53:44.141892 master-0 kubenswrapper[37036]: I0312 14:53:44.141819 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:44.142723 master-0 kubenswrapper[37036]: I0312 14:53:44.142655 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-log" containerID="cri-o://d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" gracePeriod=30 Mar 12 14:53:44.142810 master-0 kubenswrapper[37036]: I0312 14:53:44.142730 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-api" containerID="cri-o://50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" gracePeriod=30 Mar 12 14:53:44.172575 master-0 kubenswrapper[37036]: I0312 14:53:44.172507 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 14:53:44.172841 master-0 kubenswrapper[37036]: I0312 14:53:44.172767 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="29b7dc25-a599-4d70-ab20-134de7116d36" containerName="nova-scheduler-scheduler" containerID="cri-o://35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051" gracePeriod=30 Mar 12 14:53:44.306097 master-0 kubenswrapper[37036]: I0312 14:53:44.306008 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:44.306355 master-0 kubenswrapper[37036]: I0312 14:53:44.306319 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log" containerID="cri-o://b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2" 
gracePeriod=30 Mar 12 14:53:44.306500 master-0 kubenswrapper[37036]: I0312 14:53:44.306414 37036 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata" containerID="cri-o://46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c" gracePeriod=30 Mar 12 14:53:44.899077 master-0 kubenswrapper[37036]: I0312 14:53:44.899020 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:44.969810 master-0 kubenswrapper[37036]: I0312 14:53:44.969750 37036 generic.go:334] "Generic (PLEG): container finished" podID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerID="50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" exitCode=0 Mar 12 14:53:44.969810 master-0 kubenswrapper[37036]: I0312 14:53:44.969797 37036 generic.go:334] "Generic (PLEG): container finished" podID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerID="d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" exitCode=143 Mar 12 14:53:44.970074 master-0 kubenswrapper[37036]: I0312 14:53:44.969912 37036 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:44.970074 master-0 kubenswrapper[37036]: I0312 14:53:44.969952 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerDied","Data":"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d"} Mar 12 14:53:44.970074 master-0 kubenswrapper[37036]: I0312 14:53:44.969994 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerDied","Data":"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd"} Mar 12 14:53:44.970074 master-0 kubenswrapper[37036]: I0312 14:53:44.970012 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a","Type":"ContainerDied","Data":"7e48aa3990bbb825bee0f56f77ffc3c5a7346b4abfe9100e7464c5795dcc16f1"} Mar 12 14:53:44.970191 master-0 kubenswrapper[37036]: I0312 14:53:44.970078 37036 scope.go:117] "RemoveContainer" containerID="50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" Mar 12 14:53:44.973803 master-0 kubenswrapper[37036]: I0312 14:53:44.973750 37036 generic.go:334] "Generic (PLEG): container finished" podID="41dbd10c-af5b-4927-899a-f2661eede49e" containerID="b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2" exitCode=143 Mar 12 14:53:44.974025 master-0 kubenswrapper[37036]: I0312 14:53:44.973807 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerDied","Data":"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"} Mar 12 14:53:44.998434 master-0 kubenswrapper[37036]: I0312 14:53:44.998406 37036 scope.go:117] "RemoveContainer" containerID="d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" Mar 12 14:53:45.017254 master-0 
kubenswrapper[37036]: I0312 14:53:45.017186 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.017594 master-0 kubenswrapper[37036]: I0312 14:53:45.017571 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.017669 master-0 kubenswrapper[37036]: I0312 14:53:45.017654 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.017780 master-0 kubenswrapper[37036]: I0312 14:53:45.017738 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.017824 master-0 kubenswrapper[37036]: I0312 14:53:45.017788 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4r8g\" (UniqueName: \"kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.017824 master-0 kubenswrapper[37036]: I0312 14:53:45.017807 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data\") pod \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\" (UID: \"5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a\") " Mar 12 14:53:45.018866 master-0 kubenswrapper[37036]: I0312 14:53:45.018836 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs" (OuterVolumeSpecName: "logs") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:45.019343 master-0 kubenswrapper[37036]: I0312 14:53:45.019315 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.024021 37036 scope.go:117] "RemoveContainer" containerID="50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: E0312 14:53:45.024446 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d\": container with ID starting with 50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d not found: ID does not exist" containerID="50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.024540 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d"} err="failed to get container status \"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d\": rpc error: code = NotFound desc = could not find container 
\"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d\": container with ID starting with 50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d not found: ID does not exist" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.024579 37036 scope.go:117] "RemoveContainer" containerID="d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: E0312 14:53:45.024988 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd\": container with ID starting with d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd not found: ID does not exist" containerID="d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.025045 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd"} err="failed to get container status \"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd\": rpc error: code = NotFound desc = could not find container \"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd\": container with ID starting with d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd not found: ID does not exist" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.025080 37036 scope.go:117] "RemoveContainer" containerID="50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.025801 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d"} err="failed to get container status 
\"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d\": rpc error: code = NotFound desc = could not find container \"50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d\": container with ID starting with 50a803004ce750679cf33e283ae7f0c84135253788243881dd9ee48d4b468b1d not found: ID does not exist" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.025837 37036 scope.go:117] "RemoveContainer" containerID="d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd" Mar 12 14:53:45.036129 master-0 kubenswrapper[37036]: I0312 14:53:45.026093 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd"} err="failed to get container status \"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd\": rpc error: code = NotFound desc = could not find container \"d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd\": container with ID starting with d359a7ab08300b8242fb1ea43100a752b9043f500655dc3c8c463df0aa0709cd not found: ID does not exist" Mar 12 14:53:45.044430 master-0 kubenswrapper[37036]: I0312 14:53:45.037341 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g" (OuterVolumeSpecName: "kube-api-access-q4r8g") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "kube-api-access-q4r8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:45.052964 master-0 kubenswrapper[37036]: I0312 14:53:45.052748 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data" (OuterVolumeSpecName: "config-data") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:45.054643 master-0 kubenswrapper[37036]: I0312 14:53:45.054556 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:45.088021 master-0 kubenswrapper[37036]: I0312 14:53:45.085976 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:45.097327 master-0 kubenswrapper[37036]: I0312 14:53:45.096980 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" (UID: "5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:53:45.122253 master-0 kubenswrapper[37036]: I0312 14:53:45.122196 37036 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.122253 master-0 kubenswrapper[37036]: I0312 14:53:45.122232 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.122253 master-0 kubenswrapper[37036]: I0312 14:53:45.122243 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4r8g\" (UniqueName: \"kubernetes.io/projected/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-kube-api-access-q4r8g\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.122549 master-0 kubenswrapper[37036]: I0312 14:53:45.122273 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.122549 master-0 kubenswrapper[37036]: I0312 14:53:45.122288 37036 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 14:53:45.325024 master-0 kubenswrapper[37036]: I0312 14:53:45.324729 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:45.354160 master-0 kubenswrapper[37036]: I0312 14:53:45.353344 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:45.382846 master-0 kubenswrapper[37036]: I0312 14:53:45.382780 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 
14:53:45.383747 master-0 kubenswrapper[37036]: E0312 14:53:45.383711 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="dnsmasq-dns" Mar 12 14:53:45.383747 master-0 kubenswrapper[37036]: I0312 14:53:45.383744 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="dnsmasq-dns" Mar 12 14:53:45.383813 master-0 kubenswrapper[37036]: E0312 14:53:45.383787 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40de96d4-7f02-4bc3-a660-957d6b986159" containerName="nova-manage" Mar 12 14:53:45.383813 master-0 kubenswrapper[37036]: I0312 14:53:45.383797 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="40de96d4-7f02-4bc3-a660-957d6b986159" containerName="nova-manage" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: E0312 14:53:45.383820 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="init" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: I0312 14:53:45.383830 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="init" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: E0312 14:53:45.383845 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-log" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: I0312 14:53:45.383853 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-log" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: E0312 14:53:45.383875 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-api" Mar 12 14:53:45.383888 master-0 kubenswrapper[37036]: I0312 14:53:45.383882 37036 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-api" Mar 12 14:53:45.384093 master-0 kubenswrapper[37036]: E0312 14:53:45.383943 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc5c739-be50-4e7f-a490-f901f062e630" containerName="nova-manage" Mar 12 14:53:45.384093 master-0 kubenswrapper[37036]: I0312 14:53:45.383953 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc5c739-be50-4e7f-a490-f901f062e630" containerName="nova-manage" Mar 12 14:53:45.384321 master-0 kubenswrapper[37036]: I0312 14:53:45.384288 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="40de96d4-7f02-4bc3-a660-957d6b986159" containerName="nova-manage" Mar 12 14:53:45.384321 master-0 kubenswrapper[37036]: I0312 14:53:45.384314 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-log" Mar 12 14:53:45.384384 master-0 kubenswrapper[37036]: I0312 14:53:45.384356 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f146506-a967-4da7-b1cb-57ba34e55eae" containerName="dnsmasq-dns" Mar 12 14:53:45.384384 master-0 kubenswrapper[37036]: I0312 14:53:45.384380 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" containerName="nova-api-api" Mar 12 14:53:45.384558 master-0 kubenswrapper[37036]: I0312 14:53:45.384399 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc5c739-be50-4e7f-a490-f901f062e630" containerName="nova-manage" Mar 12 14:53:45.386122 master-0 kubenswrapper[37036]: I0312 14:53:45.386094 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 14:53:45.390014 master-0 kubenswrapper[37036]: I0312 14:53:45.388801 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 14:53:45.390014 master-0 kubenswrapper[37036]: I0312 14:53:45.389106 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 14:53:45.390014 master-0 kubenswrapper[37036]: I0312 14:53:45.389234 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 14:53:45.402062 master-0 kubenswrapper[37036]: I0312 14:53:45.401990 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 14:53:45.537284 master-0 kubenswrapper[37036]: I0312 14:53:45.537206 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0" Mar 12 14:53:45.538150 master-0 kubenswrapper[37036]: I0312 14:53:45.538116 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qmb\" (UniqueName: \"kubernetes.io/projected/371ab32b-22e0-41bb-8d36-5634d9ea3722-kube-api-access-z8qmb\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0" Mar 12 14:53:45.538387 master-0 kubenswrapper[37036]: I0312 14:53:45.538344 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371ab32b-22e0-41bb-8d36-5634d9ea3722-logs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0" Mar 12 14:53:45.538580 master-0 kubenswrapper[37036]: I0312 14:53:45.538561 37036 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-config-data\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.538829 master-0 kubenswrapper[37036]: I0312 14:53:45.538811 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-internal-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.539058 master-0 kubenswrapper[37036]: I0312 14:53:45.539040 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-public-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.640734 master-0 kubenswrapper[37036]: I0312 14:53:45.640685 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-public-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.640992 master-0 kubenswrapper[37036]: I0312 14:53:45.640755 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.640992 master-0 kubenswrapper[37036]: I0312 14:53:45.640805 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8qmb\" (UniqueName: \"kubernetes.io/projected/371ab32b-22e0-41bb-8d36-5634d9ea3722-kube-api-access-z8qmb\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.640992 master-0 kubenswrapper[37036]: I0312 14:53:45.640863 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371ab32b-22e0-41bb-8d36-5634d9ea3722-logs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.640992 master-0 kubenswrapper[37036]: I0312 14:53:45.640941 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-config-data\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.641191 master-0 kubenswrapper[37036]: I0312 14:53:45.641031 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-internal-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.641492 master-0 kubenswrapper[37036]: I0312 14:53:45.641455 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/371ab32b-22e0-41bb-8d36-5634d9ea3722-logs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.646101 master-0 kubenswrapper[37036]: I0312 14:53:45.646055 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-public-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.646228 master-0 kubenswrapper[37036]: I0312 14:53:45.646147 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.647634 master-0 kubenswrapper[37036]: I0312 14:53:45.647402 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-config-data\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.647634 master-0 kubenswrapper[37036]: I0312 14:53:45.647422 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/371ab32b-22e0-41bb-8d36-5634d9ea3722-internal-tls-certs\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.659586 master-0 kubenswrapper[37036]: I0312 14:53:45.659526 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8qmb\" (UniqueName: \"kubernetes.io/projected/371ab32b-22e0-41bb-8d36-5634d9ea3722-kube-api-access-z8qmb\") pod \"nova-api-0\" (UID: \"371ab32b-22e0-41bb-8d36-5634d9ea3722\") " pod="openstack/nova-api-0"
Mar 12 14:53:45.879092 master-0 kubenswrapper[37036]: I0312 14:53:45.879029 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 14:53:45.923394 master-0 kubenswrapper[37036]: I0312 14:53:45.923347 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 14:53:45.996191 master-0 kubenswrapper[37036]: I0312 14:53:45.996119 37036 generic.go:334] "Generic (PLEG): container finished" podID="29b7dc25-a599-4d70-ab20-134de7116d36" containerID="35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051" exitCode=0
Mar 12 14:53:45.996191 master-0 kubenswrapper[37036]: I0312 14:53:45.996187 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b7dc25-a599-4d70-ab20-134de7116d36","Type":"ContainerDied","Data":"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"}
Mar 12 14:53:45.996191 master-0 kubenswrapper[37036]: I0312 14:53:45.996214 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b7dc25-a599-4d70-ab20-134de7116d36","Type":"ContainerDied","Data":"fa7b5d075465e552484fbd9672e9b9f7db591955e604b1a26b8695e5f7852190"}
Mar 12 14:53:45.996628 master-0 kubenswrapper[37036]: I0312 14:53:45.996231 37036 scope.go:117] "RemoveContainer" containerID="35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"
Mar 12 14:53:45.996628 master-0 kubenswrapper[37036]: I0312 14:53:45.996338 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.049271 master-0 kubenswrapper[37036]: I0312 14:53:46.049195 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle\") pod \"29b7dc25-a599-4d70-ab20-134de7116d36\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") "
Mar 12 14:53:46.049671 master-0 kubenswrapper[37036]: I0312 14:53:46.049383 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data\") pod \"29b7dc25-a599-4d70-ab20-134de7116d36\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") "
Mar 12 14:53:46.049671 master-0 kubenswrapper[37036]: I0312 14:53:46.049611 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbhw5\" (UniqueName: \"kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5\") pod \"29b7dc25-a599-4d70-ab20-134de7116d36\" (UID: \"29b7dc25-a599-4d70-ab20-134de7116d36\") "
Mar 12 14:53:46.057965 master-0 kubenswrapper[37036]: I0312 14:53:46.055539 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5" (OuterVolumeSpecName: "kube-api-access-vbhw5") pod "29b7dc25-a599-4d70-ab20-134de7116d36" (UID: "29b7dc25-a599-4d70-ab20-134de7116d36"). InnerVolumeSpecName "kube-api-access-vbhw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:53:46.094678 master-0 kubenswrapper[37036]: I0312 14:53:46.094403 37036 scope.go:117] "RemoveContainer" containerID="35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"
Mar 12 14:53:46.096451 master-0 kubenswrapper[37036]: E0312 14:53:46.096398 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051\": container with ID starting with 35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051 not found: ID does not exist" containerID="35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"
Mar 12 14:53:46.096571 master-0 kubenswrapper[37036]: I0312 14:53:46.096444 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051"} err="failed to get container status \"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051\": rpc error: code = NotFound desc = could not find container \"35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051\": container with ID starting with 35b8098d082887e6625505328b0325b48a9845b6052dd7c30e869af7468c5051 not found: ID does not exist"
Mar 12 14:53:46.100740 master-0 kubenswrapper[37036]: I0312 14:53:46.099749 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29b7dc25-a599-4d70-ab20-134de7116d36" (UID: "29b7dc25-a599-4d70-ab20-134de7116d36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:46.108925 master-0 kubenswrapper[37036]: I0312 14:53:46.108853 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data" (OuterVolumeSpecName: "config-data") pod "29b7dc25-a599-4d70-ab20-134de7116d36" (UID: "29b7dc25-a599-4d70-ab20-134de7116d36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:46.152251 master-0 kubenswrapper[37036]: I0312 14:53:46.152150 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:46.152251 master-0 kubenswrapper[37036]: I0312 14:53:46.152219 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbhw5\" (UniqueName: \"kubernetes.io/projected/29b7dc25-a599-4d70-ab20-134de7116d36-kube-api-access-vbhw5\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:46.152251 master-0 kubenswrapper[37036]: I0312 14:53:46.152231 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b7dc25-a599-4d70-ab20-134de7116d36-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:46.348606 master-0 kubenswrapper[37036]: I0312 14:53:46.348481 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:46.369807 master-0 kubenswrapper[37036]: I0312 14:53:46.369488 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:46.385547 master-0 kubenswrapper[37036]: I0312 14:53:46.385498 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:46.386287 master-0 kubenswrapper[37036]: E0312 14:53:46.386260 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b7dc25-a599-4d70-ab20-134de7116d36" containerName="nova-scheduler-scheduler"
Mar 12 14:53:46.386287 master-0 kubenswrapper[37036]: I0312 14:53:46.386286 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b7dc25-a599-4d70-ab20-134de7116d36" containerName="nova-scheduler-scheduler"
Mar 12 14:53:46.386688 master-0 kubenswrapper[37036]: I0312 14:53:46.386665 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b7dc25-a599-4d70-ab20-134de7116d36" containerName="nova-scheduler-scheduler"
Mar 12 14:53:46.387722 master-0 kubenswrapper[37036]: I0312 14:53:46.387691 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.394423 master-0 kubenswrapper[37036]: I0312 14:53:46.394368 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 12 14:53:46.394670 master-0 kubenswrapper[37036]: W0312 14:53:46.394637 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod371ab32b_22e0_41bb_8d36_5634d9ea3722.slice/crio-0b9d0855f176257991791fc8ea8039cf148dbf9c36843596b03576143f054a31 WatchSource:0}: Error finding container 0b9d0855f176257991791fc8ea8039cf148dbf9c36843596b03576143f054a31: Status 404 returned error can't find the container with id 0b9d0855f176257991791fc8ea8039cf148dbf9c36843596b03576143f054a31
Mar 12 14:53:46.398022 master-0 kubenswrapper[37036]: I0312 14:53:46.397965 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 14:53:46.423981 master-0 kubenswrapper[37036]: I0312 14:53:46.423920 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:46.566934 master-0 kubenswrapper[37036]: I0312 14:53:46.566886 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlhv\" (UniqueName: \"kubernetes.io/projected/96a60e40-b6c7-4771-9eb6-54aa05628a4d-kube-api-access-2rlhv\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.567148 master-0 kubenswrapper[37036]: I0312 14:53:46.567130 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-config-data\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.567327 master-0 kubenswrapper[37036]: I0312 14:53:46.567304 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.669102 master-0 kubenswrapper[37036]: I0312 14:53:46.669050 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlhv\" (UniqueName: \"kubernetes.io/projected/96a60e40-b6c7-4771-9eb6-54aa05628a4d-kube-api-access-2rlhv\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.669102 master-0 kubenswrapper[37036]: I0312 14:53:46.669106 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-config-data\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.669511 master-0 kubenswrapper[37036]: I0312 14:53:46.669483 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.672153 master-0 kubenswrapper[37036]: I0312 14:53:46.672116 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-config-data\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.674185 master-0 kubenswrapper[37036]: I0312 14:53:46.674153 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96a60e40-b6c7-4771-9eb6-54aa05628a4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.688440 master-0 kubenswrapper[37036]: I0312 14:53:46.688397 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlhv\" (UniqueName: \"kubernetes.io/projected/96a60e40-b6c7-4771-9eb6-54aa05628a4d-kube-api-access-2rlhv\") pod \"nova-scheduler-0\" (UID: \"96a60e40-b6c7-4771-9eb6-54aa05628a4d\") " pod="openstack/nova-scheduler-0"
Mar 12 14:53:46.869628 master-0 kubenswrapper[37036]: I0312 14:53:46.869568 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 14:53:47.065721 master-0 kubenswrapper[37036]: I0312 14:53:47.065596 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"371ab32b-22e0-41bb-8d36-5634d9ea3722","Type":"ContainerStarted","Data":"82a62abd78a29f059df1395d0c6aaff805652ee116b4b9af4c0725fd3b07e470"}
Mar 12 14:53:47.068390 master-0 kubenswrapper[37036]: I0312 14:53:47.068342 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"371ab32b-22e0-41bb-8d36-5634d9ea3722","Type":"ContainerStarted","Data":"37a10a37633e77d99a537d5b4d3239b87a297791f802f6906dcffddc5262b5d3"}
Mar 12 14:53:47.068390 master-0 kubenswrapper[37036]: I0312 14:53:47.068382 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"371ab32b-22e0-41bb-8d36-5634d9ea3722","Type":"ContainerStarted","Data":"0b9d0855f176257991791fc8ea8039cf148dbf9c36843596b03576143f054a31"}
Mar 12 14:53:47.103970 master-0 kubenswrapper[37036]: I0312 14:53:47.100342 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.100325633 podStartE2EDuration="2.100325633s" podCreationTimestamp="2026-03-12 14:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:47.093716923 +0000 UTC m=+1086.101457870" watchObservedRunningTime="2026-03-12 14:53:47.100325633 +0000 UTC m=+1086.108066570"
Mar 12 14:53:47.249871 master-0 kubenswrapper[37036]: I0312 14:53:47.249811 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b7dc25-a599-4d70-ab20-134de7116d36" path="/var/lib/kubelet/pods/29b7dc25-a599-4d70-ab20-134de7116d36/volumes"
Mar 12 14:53:47.250563 master-0 kubenswrapper[37036]: I0312 14:53:47.250532 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a" path="/var/lib/kubelet/pods/5eeaa131-5527-4cd6-8e3b-e3dd359bcf3a/volumes"
Mar 12 14:53:47.336186 master-0 kubenswrapper[37036]: I0312 14:53:47.336023 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 14:53:47.360589 master-0 kubenswrapper[37036]: W0312 14:53:47.360551 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96a60e40_b6c7_4771_9eb6_54aa05628a4d.slice/crio-603cab2a4dcb1fb5e4a2d44599e4f75e2fdb3fd2d33f59579e8f6b586d873493 WatchSource:0}: Error finding container 603cab2a4dcb1fb5e4a2d44599e4f75e2fdb3fd2d33f59579e8f6b586d873493: Status 404 returned error can't find the container with id 603cab2a4dcb1fb5e4a2d44599e4f75e2fdb3fd2d33f59579e8f6b586d873493
Mar 12 14:53:47.439947 master-0 kubenswrapper[37036]: I0312 14:53:47.439859 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": read tcp 10.128.0.2:44966->10.128.1.16:8775: read: connection reset by peer"
Mar 12 14:53:47.440199 master-0 kubenswrapper[37036]: I0312 14:53:47.439858 37036 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": read tcp 10.128.0.2:44970->10.128.1.16:8775: read: connection reset by peer"
Mar 12 14:53:47.979275 master-0 kubenswrapper[37036]: I0312 14:53:47.978776 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:53:48.106598 master-0 kubenswrapper[37036]: I0312 14:53:48.106468 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prm86\" (UniqueName: \"kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86\") pod \"41dbd10c-af5b-4927-899a-f2661eede49e\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") "
Mar 12 14:53:48.106598 master-0 kubenswrapper[37036]: I0312 14:53:48.106547 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs\") pod \"41dbd10c-af5b-4927-899a-f2661eede49e\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") "
Mar 12 14:53:48.106598 master-0 kubenswrapper[37036]: I0312 14:53:48.106595 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle\") pod \"41dbd10c-af5b-4927-899a-f2661eede49e\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") "
Mar 12 14:53:48.107324 master-0 kubenswrapper[37036]: I0312 14:53:48.106628 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data\") pod \"41dbd10c-af5b-4927-899a-f2661eede49e\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") "
Mar 12 14:53:48.107324 master-0 kubenswrapper[37036]: I0312 14:53:48.106651 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs\") pod \"41dbd10c-af5b-4927-899a-f2661eede49e\" (UID: \"41dbd10c-af5b-4927-899a-f2661eede49e\") "
Mar 12 14:53:48.107680 master-0 kubenswrapper[37036]: I0312 14:53:48.107624 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs" (OuterVolumeSpecName: "logs") pod "41dbd10c-af5b-4927-899a-f2661eede49e" (UID: "41dbd10c-af5b-4927-899a-f2661eede49e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:53:48.109687 master-0 kubenswrapper[37036]: I0312 14:53:48.109569 37036 generic.go:334] "Generic (PLEG): container finished" podID="41dbd10c-af5b-4927-899a-f2661eede49e" containerID="46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c" exitCode=0
Mar 12 14:53:48.109937 master-0 kubenswrapper[37036]: I0312 14:53:48.109663 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerDied","Data":"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"}
Mar 12 14:53:48.109937 master-0 kubenswrapper[37036]: I0312 14:53:48.109710 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:53:48.109937 master-0 kubenswrapper[37036]: I0312 14:53:48.109744 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dbd10c-af5b-4927-899a-f2661eede49e","Type":"ContainerDied","Data":"35dfbe220b75e7d1b3b211033be3823ed86ca037e9d9026570a3be1589fd597e"}
Mar 12 14:53:48.109937 master-0 kubenswrapper[37036]: I0312 14:53:48.109765 37036 scope.go:117] "RemoveContainer" containerID="46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"
Mar 12 14:53:48.110968 master-0 kubenswrapper[37036]: I0312 14:53:48.110913 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86" (OuterVolumeSpecName: "kube-api-access-prm86") pod "41dbd10c-af5b-4927-899a-f2661eede49e" (UID: "41dbd10c-af5b-4927-899a-f2661eede49e"). InnerVolumeSpecName "kube-api-access-prm86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:53:48.112864 master-0 kubenswrapper[37036]: I0312 14:53:48.112828 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96a60e40-b6c7-4771-9eb6-54aa05628a4d","Type":"ContainerStarted","Data":"28a2f76eee6c9c7af06c60bf5296bec97fa7a9d4718291d6c99798b9e1f5d063"}
Mar 12 14:53:48.112864 master-0 kubenswrapper[37036]: I0312 14:53:48.112864 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96a60e40-b6c7-4771-9eb6-54aa05628a4d","Type":"ContainerStarted","Data":"603cab2a4dcb1fb5e4a2d44599e4f75e2fdb3fd2d33f59579e8f6b586d873493"}
Mar 12 14:53:48.151075 master-0 kubenswrapper[37036]: I0312 14:53:48.146772 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data" (OuterVolumeSpecName: "config-data") pod "41dbd10c-af5b-4927-899a-f2661eede49e" (UID: "41dbd10c-af5b-4927-899a-f2661eede49e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:48.151075 master-0 kubenswrapper[37036]: I0312 14:53:48.150230 37036 scope.go:117] "RemoveContainer" containerID="b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"
Mar 12 14:53:48.160562 master-0 kubenswrapper[37036]: I0312 14:53:48.160182 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.160130921 podStartE2EDuration="2.160130921s" podCreationTimestamp="2026-03-12 14:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:48.137271163 +0000 UTC m=+1087.145012100" watchObservedRunningTime="2026-03-12 14:53:48.160130921 +0000 UTC m=+1087.167871858"
Mar 12 14:53:48.190518 master-0 kubenswrapper[37036]: I0312 14:53:48.190383 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "41dbd10c-af5b-4927-899a-f2661eede49e" (UID: "41dbd10c-af5b-4927-899a-f2661eede49e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:48.194143 master-0 kubenswrapper[37036]: I0312 14:53:48.192575 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41dbd10c-af5b-4927-899a-f2661eede49e" (UID: "41dbd10c-af5b-4927-899a-f2661eede49e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209148 37036 scope.go:117] "RemoveContainer" containerID="46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209754 37036 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dbd10c-af5b-4927-899a-f2661eede49e-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209798 37036 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209809 37036 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209820 37036 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/41dbd10c-af5b-4927-899a-f2661eede49e-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:48.211217 master-0 kubenswrapper[37036]: I0312 14:53:48.209829 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prm86\" (UniqueName: \"kubernetes.io/projected/41dbd10c-af5b-4927-899a-f2661eede49e-kube-api-access-prm86\") on node \"master-0\" DevicePath \"\""
Mar 12 14:53:48.211485 master-0 kubenswrapper[37036]: E0312 14:53:48.211396 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c\": container with ID starting with 46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c not found: ID does not exist" containerID="46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"
Mar 12 14:53:48.211485 master-0 kubenswrapper[37036]: I0312 14:53:48.211429 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c"} err="failed to get container status \"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c\": rpc error: code = NotFound desc = could not find container \"46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c\": container with ID starting with 46d50b6503878f989c61731a777aceed07e0f2296d9287c5960f12b857206c5c not found: ID does not exist"
Mar 12 14:53:48.211485 master-0 kubenswrapper[37036]: I0312 14:53:48.211456 37036 scope.go:117] "RemoveContainer" containerID="b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"
Mar 12 14:53:48.211856 master-0 kubenswrapper[37036]: E0312 14:53:48.211828 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2\": container with ID starting with b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2 not found: ID does not exist" containerID="b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"
Mar 12 14:53:48.211947 master-0 kubenswrapper[37036]: I0312 14:53:48.211851 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2"} err="failed to get container status \"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2\": rpc error: code = NotFound desc = could not find container \"b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2\": container with ID starting with b8e95615a47ca2ad8f86312cb3bf6d93155dfe7d9ef87cc466c970c7a0e320b2 not found: ID does not exist"
Mar 12 14:53:48.468766 master-0 kubenswrapper[37036]: I0312 14:53:48.468705 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:53:48.488071 master-0 kubenswrapper[37036]: I0312 14:53:48.487952 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:53:48.500011 master-0 kubenswrapper[37036]: I0312 14:53:48.499948 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:53:48.500692 master-0 kubenswrapper[37036]: E0312 14:53:48.500659 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata"
Mar 12 14:53:48.500771 master-0 kubenswrapper[37036]: I0312 14:53:48.500692 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata"
Mar 12 14:53:48.500771 master-0 kubenswrapper[37036]: E0312 14:53:48.500728 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log"
Mar 12 14:53:48.500771 master-0 kubenswrapper[37036]: I0312 14:53:48.500742 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log"
Mar 12 14:53:48.501099 master-0 kubenswrapper[37036]: I0312 14:53:48.501076 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-log"
Mar 12 14:53:48.501177 master-0 kubenswrapper[37036]: I0312 14:53:48.501146 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" containerName="nova-metadata-metadata"
Mar 12 14:53:48.502631 master-0 kubenswrapper[37036]: I0312 14:53:48.502609 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 14:53:48.505617 master-0 kubenswrapper[37036]: I0312 14:53:48.505553 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 14:53:48.506236 master-0 kubenswrapper[37036]: I0312 14:53:48.506200 37036 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 12 14:53:48.519966 master-0 kubenswrapper[37036]: I0312 14:53:48.518313 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.519966 master-0 kubenswrapper[37036]: I0312 14:53:48.518422 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-logs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.519966 master-0 kubenswrapper[37036]: I0312 14:53:48.518564 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.519966 master-0 kubenswrapper[37036]: I0312 14:53:48.518652 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrl7\" (UniqueName: \"kubernetes.io/projected/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-kube-api-access-qnrl7\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.519966 master-0 kubenswrapper[37036]: I0312 14:53:48.518779 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-config-data\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.547874 master-0 kubenswrapper[37036]: I0312 14:53:48.547813 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.620592 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.620682 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-logs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.620772 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.620798 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrl7\" (UniqueName: \"kubernetes.io/projected/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-kube-api-access-qnrl7\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.621015 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-config-data\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.621477 master-0 kubenswrapper[37036]: I0312 14:53:48.621147 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-logs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.624649 master-0 kubenswrapper[37036]: I0312 14:53:48.624575 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.625778 master-0 kubenswrapper[37036]: I0312 14:53:48.625680 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0"
Mar 12 14:53:48.632025 master-0 kubenswrapper[37036]: I0312 14:53:48.631752 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-config-data\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") "
pod="openstack/nova-metadata-0" Mar 12 14:53:48.647695 master-0 kubenswrapper[37036]: I0312 14:53:48.647399 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrl7\" (UniqueName: \"kubernetes.io/projected/67eba7bf-7232-4b81-b3cc-2f3f34737ba6-kube-api-access-qnrl7\") pod \"nova-metadata-0\" (UID: \"67eba7bf-7232-4b81-b3cc-2f3f34737ba6\") " pod="openstack/nova-metadata-0" Mar 12 14:53:48.848571 master-0 kubenswrapper[37036]: I0312 14:53:48.848525 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 14:53:49.250246 master-0 kubenswrapper[37036]: I0312 14:53:49.250178 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dbd10c-af5b-4927-899a-f2661eede49e" path="/var/lib/kubelet/pods/41dbd10c-af5b-4927-899a-f2661eede49e/volumes" Mar 12 14:53:49.295141 master-0 kubenswrapper[37036]: I0312 14:53:49.295085 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 14:53:50.148145 master-0 kubenswrapper[37036]: I0312 14:53:50.148020 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67eba7bf-7232-4b81-b3cc-2f3f34737ba6","Type":"ContainerStarted","Data":"dfab3173f0175dce880abd9ca50b86b3be4c4b7f1a24f526c680647d255c85db"} Mar 12 14:53:50.148145 master-0 kubenswrapper[37036]: I0312 14:53:50.148074 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67eba7bf-7232-4b81-b3cc-2f3f34737ba6","Type":"ContainerStarted","Data":"42711d867325204d862247fc04a1d8a19fbea588a1462ca070f5a5f69385ac24"} Mar 12 14:53:50.148145 master-0 kubenswrapper[37036]: I0312 14:53:50.148084 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67eba7bf-7232-4b81-b3cc-2f3f34737ba6","Type":"ContainerStarted","Data":"33007a790a001a0a106962f0b7eaea87d766a619d35771284ea89f2f51844748"} Mar 12 14:53:50.172046 
master-0 kubenswrapper[37036]: I0312 14:53:50.171952 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.171931434 podStartE2EDuration="2.171931434s" podCreationTimestamp="2026-03-12 14:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:53:50.169055544 +0000 UTC m=+1089.176796491" watchObservedRunningTime="2026-03-12 14:53:50.171931434 +0000 UTC m=+1089.179672391" Mar 12 14:53:51.869764 master-0 kubenswrapper[37036]: I0312 14:53:51.869688 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 14:53:53.849565 master-0 kubenswrapper[37036]: I0312 14:53:53.849491 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 14:53:53.849565 master-0 kubenswrapper[37036]: I0312 14:53:53.849556 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 14:53:55.880027 master-0 kubenswrapper[37036]: I0312 14:53:55.879777 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 14:53:55.880027 master-0 kubenswrapper[37036]: I0312 14:53:55.879862 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 14:53:56.877087 master-0 kubenswrapper[37036]: I0312 14:53:56.876172 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 14:53:56.903151 master-0 kubenswrapper[37036]: I0312 14:53:56.903048 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="371ab32b-22e0-41bb-8d36-5634d9ea3722" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Mar 12 14:53:56.903781 master-0 kubenswrapper[37036]: I0312 14:53:56.903328 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="371ab32b-22e0-41bb-8d36-5634d9ea3722" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.23:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:53:57.085244 master-0 kubenswrapper[37036]: I0312 14:53:57.085196 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 14:53:57.279419 master-0 kubenswrapper[37036]: I0312 14:53:57.279332 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 14:53:58.849554 master-0 kubenswrapper[37036]: I0312 14:53:58.849498 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 14:53:58.850102 master-0 kubenswrapper[37036]: I0312 14:53:58.849826 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 14:53:59.866326 master-0 kubenswrapper[37036]: I0312 14:53:59.866210 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67eba7bf-7232-4b81-b3cc-2f3f34737ba6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.25:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:53:59.866977 master-0 kubenswrapper[37036]: I0312 14:53:59.866250 37036 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67eba7bf-7232-4b81-b3cc-2f3f34737ba6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.25:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 14:54:05.885787 master-0 kubenswrapper[37036]: I0312 14:54:05.885711 37036 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 14:54:05.886534 master-0 kubenswrapper[37036]: I0312 14:54:05.886341 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 14:54:05.891000 master-0 kubenswrapper[37036]: I0312 14:54:05.890007 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 14:54:05.892184 master-0 kubenswrapper[37036]: I0312 14:54:05.892152 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 14:54:06.344492 master-0 kubenswrapper[37036]: I0312 14:54:06.344440 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 14:54:06.350494 master-0 kubenswrapper[37036]: I0312 14:54:06.350438 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 14:54:08.865268 master-0 kubenswrapper[37036]: I0312 14:54:08.865214 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 14:54:08.865853 master-0 kubenswrapper[37036]: I0312 14:54:08.865290 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 14:54:08.869967 master-0 kubenswrapper[37036]: I0312 14:54:08.869928 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 14:54:08.871069 master-0 kubenswrapper[37036]: I0312 14:54:08.871045 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 14:54:35.306685 master-0 kubenswrapper[37036]: I0312 14:54:35.306607 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"] Mar 12 14:54:35.307471 master-0 kubenswrapper[37036]: I0312 14:54:35.306886 37036 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" podUID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" containerName="sushy-emulator" containerID="cri-o://c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641" gracePeriod=30 Mar 12 14:54:36.302627 master-0 kubenswrapper[37036]: I0312 14:54:36.302565 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" Mar 12 14:54:36.367850 master-0 kubenswrapper[37036]: I0312 14:54:36.367780 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjlf\" (UniqueName: \"kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf\") pod \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " Mar 12 14:54:36.368510 master-0 kubenswrapper[37036]: I0312 14:54:36.368294 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config\") pod \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " Mar 12 14:54:36.368510 master-0 kubenswrapper[37036]: I0312 14:54:36.368385 37036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config\") pod \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\" (UID: \"b77289fb-9e3c-448c-a62c-9cba16fb43b8\") " Mar 12 14:54:36.371146 master-0 kubenswrapper[37036]: I0312 14:54:36.371104 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "b77289fb-9e3c-448c-a62c-9cba16fb43b8" (UID: "b77289fb-9e3c-448c-a62c-9cba16fb43b8"). 
InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:54:36.376263 master-0 kubenswrapper[37036]: I0312 14:54:36.376218 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf" (OuterVolumeSpecName: "kube-api-access-cdjlf") pod "b77289fb-9e3c-448c-a62c-9cba16fb43b8" (UID: "b77289fb-9e3c-448c-a62c-9cba16fb43b8"). InnerVolumeSpecName "kube-api-access-cdjlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:36.392863 master-0 kubenswrapper[37036]: I0312 14:54:36.392750 37036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "b77289fb-9e3c-448c-a62c-9cba16fb43b8" (UID: "b77289fb-9e3c-448c-a62c-9cba16fb43b8"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:54:36.473121 master-0 kubenswrapper[37036]: I0312 14:54:36.472049 37036 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b77289fb-9e3c-448c-a62c-9cba16fb43b8-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:54:36.473121 master-0 kubenswrapper[37036]: I0312 14:54:36.472098 37036 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/b77289fb-9e3c-448c-a62c-9cba16fb43b8-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 12 14:54:36.473121 master-0 kubenswrapper[37036]: I0312 14:54:36.472113 37036 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjlf\" (UniqueName: \"kubernetes.io/projected/b77289fb-9e3c-448c-a62c-9cba16fb43b8-kube-api-access-cdjlf\") on node \"master-0\" DevicePath \"\"" Mar 12 14:54:36.709284 master-0 kubenswrapper[37036]: I0312 14:54:36.709231 
37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7btjb"] Mar 12 14:54:36.710242 master-0 kubenswrapper[37036]: E0312 14:54:36.710218 37036 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" containerName="sushy-emulator" Mar 12 14:54:36.710359 master-0 kubenswrapper[37036]: I0312 14:54:36.710344 37036 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" containerName="sushy-emulator" Mar 12 14:54:36.710816 master-0 kubenswrapper[37036]: I0312 14:54:36.710797 37036 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" containerName="sushy-emulator" Mar 12 14:54:36.711988 master-0 kubenswrapper[37036]: I0312 14:54:36.711966 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.741333 master-0 kubenswrapper[37036]: I0312 14:54:36.741259 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7btjb"] Mar 12 14:54:36.746655 master-0 kubenswrapper[37036]: I0312 14:54:36.744505 37036 generic.go:334] "Generic (PLEG): container finished" podID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" containerID="c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641" exitCode=0 Mar 12 14:54:36.746655 master-0 kubenswrapper[37036]: I0312 14:54:36.744555 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" event={"ID":"b77289fb-9e3c-448c-a62c-9cba16fb43b8","Type":"ContainerDied","Data":"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641"} Mar 12 14:54:36.746655 master-0 kubenswrapper[37036]: I0312 14:54:36.744582 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" 
event={"ID":"b77289fb-9e3c-448c-a62c-9cba16fb43b8","Type":"ContainerDied","Data":"e31a799e13a52c9c8c63a4be41fd1241745cf44b5c0af7308e0ca38cd28ddc01"} Mar 12 14:54:36.746655 master-0 kubenswrapper[37036]: I0312 14:54:36.744601 37036 scope.go:117] "RemoveContainer" containerID="c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641" Mar 12 14:54:36.746655 master-0 kubenswrapper[37036]: I0312 14:54:36.744605 37036 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rfr25" Mar 12 14:54:36.795049 master-0 kubenswrapper[37036]: I0312 14:54:36.794988 37036 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"] Mar 12 14:54:36.814962 master-0 kubenswrapper[37036]: I0312 14:54:36.814885 37036 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rfr25"] Mar 12 14:54:36.838166 master-0 kubenswrapper[37036]: I0312 14:54:36.836980 37036 scope.go:117] "RemoveContainer" containerID="c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641" Mar 12 14:54:36.840324 master-0 kubenswrapper[37036]: E0312 14:54:36.840267 37036 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641\": container with ID starting with c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641 not found: ID does not exist" containerID="c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641" Mar 12 14:54:36.840688 master-0 kubenswrapper[37036]: I0312 14:54:36.840461 37036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641"} err="failed to get container status \"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641\": rpc error: code = NotFound desc = could not find 
container \"c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641\": container with ID starting with c55ea3141ea4ca3938cbf08815970b6a7543c2b491c6a21df96d44bd4af01641 not found: ID does not exist" Mar 12 14:54:36.882980 master-0 kubenswrapper[37036]: I0312 14:54:36.882916 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3a4efe37-24b5-424d-8700-90615380982a-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.883508 master-0 kubenswrapper[37036]: I0312 14:54:36.883449 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t44jw\" (UniqueName: \"kubernetes.io/projected/3a4efe37-24b5-424d-8700-90615380982a-kube-api-access-t44jw\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.883671 master-0 kubenswrapper[37036]: I0312 14:54:36.883640 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3a4efe37-24b5-424d-8700-90615380982a-os-client-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.985214 master-0 kubenswrapper[37036]: I0312 14:54:36.985069 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t44jw\" (UniqueName: \"kubernetes.io/projected/3a4efe37-24b5-424d-8700-90615380982a-kube-api-access-t44jw\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.985466 master-0 
kubenswrapper[37036]: I0312 14:54:36.985230 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3a4efe37-24b5-424d-8700-90615380982a-os-client-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.985466 master-0 kubenswrapper[37036]: I0312 14:54:36.985306 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3a4efe37-24b5-424d-8700-90615380982a-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.986386 master-0 kubenswrapper[37036]: I0312 14:54:36.986353 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3a4efe37-24b5-424d-8700-90615380982a-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:36.990635 master-0 kubenswrapper[37036]: I0312 14:54:36.990586 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3a4efe37-24b5-424d-8700-90615380982a-os-client-config\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: \"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:37.005481 master-0 kubenswrapper[37036]: I0312 14:54:37.005436 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t44jw\" (UniqueName: \"kubernetes.io/projected/3a4efe37-24b5-424d-8700-90615380982a-kube-api-access-t44jw\") pod \"sushy-emulator-84965d5d88-7btjb\" (UID: 
\"3a4efe37-24b5-424d-8700-90615380982a\") " pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:37.035528 master-0 kubenswrapper[37036]: I0312 14:54:37.035465 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:37.254251 master-0 kubenswrapper[37036]: I0312 14:54:37.254188 37036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77289fb-9e3c-448c-a62c-9cba16fb43b8" path="/var/lib/kubelet/pods/b77289fb-9e3c-448c-a62c-9cba16fb43b8/volumes" Mar 12 14:54:37.565387 master-0 kubenswrapper[37036]: I0312 14:54:37.565322 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-7btjb"] Mar 12 14:54:37.580600 master-0 kubenswrapper[37036]: W0312 14:54:37.580365 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a4efe37_24b5_424d_8700_90615380982a.slice/crio-c8d0b7e9f2d51038a78152f1719d3f820b0f27f7396fa126f40175879c805d0e WatchSource:0}: Error finding container c8d0b7e9f2d51038a78152f1719d3f820b0f27f7396fa126f40175879c805d0e: Status 404 returned error can't find the container with id c8d0b7e9f2d51038a78152f1719d3f820b0f27f7396fa126f40175879c805d0e Mar 12 14:54:37.756697 master-0 kubenswrapper[37036]: I0312 14:54:37.756641 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" event={"ID":"3a4efe37-24b5-424d-8700-90615380982a","Type":"ContainerStarted","Data":"c8d0b7e9f2d51038a78152f1719d3f820b0f27f7396fa126f40175879c805d0e"} Mar 12 14:54:38.775549 master-0 kubenswrapper[37036]: I0312 14:54:38.775403 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" event={"ID":"3a4efe37-24b5-424d-8700-90615380982a","Type":"ContainerStarted","Data":"2ca497156bc94244c3ea2e6c00eb36916be9cb56a65ce13d4b24af7f184a2594"} Mar 12 
14:54:38.822551 master-0 kubenswrapper[37036]: I0312 14:54:38.822470 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" podStartSLOduration=2.822450386 podStartE2EDuration="2.822450386s" podCreationTimestamp="2026-03-12 14:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:54:38.803339069 +0000 UTC m=+1137.811080006" watchObservedRunningTime="2026-03-12 14:54:38.822450386 +0000 UTC m=+1137.830191323" Mar 12 14:54:47.036024 master-0 kubenswrapper[37036]: I0312 14:54:47.035947 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:47.036024 master-0 kubenswrapper[37036]: I0312 14:54:47.036008 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:47.049675 master-0 kubenswrapper[37036]: I0312 14:54:47.049598 37036 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:54:47.892641 master-0 kubenswrapper[37036]: I0312 14:54:47.892547 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-7btjb" Mar 12 14:55:44.608250 master-0 kubenswrapper[37036]: I0312 14:55:44.608168 37036 scope.go:117] "RemoveContainer" containerID="9457a74c3b0d7737ba8ee40b0e7cd1ce0418c98f841e5d68dfc27385f4ea28bd" Mar 12 14:55:44.630416 master-0 kubenswrapper[37036]: I0312 14:55:44.630362 37036 scope.go:117] "RemoveContainer" containerID="3312bced8ddfaa859b4b5ce821d267e776224cbcedcad7308170a24b3f24dd14" Mar 12 14:56:44.725009 master-0 kubenswrapper[37036]: I0312 14:56:44.724854 37036 scope.go:117] "RemoveContainer" containerID="c53e9b5eca44dbdb711fd9ca31714b12eeb4a562c2a32756e0839e8a22701626" Mar 12 14:56:44.776665 
master-0 kubenswrapper[37036]: I0312 14:56:44.776606 37036 scope.go:117] "RemoveContainer" containerID="4a5d7cdb26d1dba2275f36ad028c23931eadc88d305215e98e2edefe9cf43015" Mar 12 14:56:44.800713 master-0 kubenswrapper[37036]: I0312 14:56:44.800633 37036 scope.go:117] "RemoveContainer" containerID="f636ee0e3059b014460c00cc556a02c3208e8c0f62e970894bdb2e5c1ad01b52" Mar 12 14:56:44.832126 master-0 kubenswrapper[37036]: I0312 14:56:44.831927 37036 scope.go:117] "RemoveContainer" containerID="c8ba044ac56699d5d1fefb52ed073dbfee76f81402b701b3312728e398391369" Mar 12 14:57:44.975103 master-0 kubenswrapper[37036]: I0312 14:57:44.975043 37036 scope.go:117] "RemoveContainer" containerID="fd5bbad93f3b715cb2cff75c5354ceb537717061278c7c8765fe906f2526900e" Mar 12 14:57:45.002707 master-0 kubenswrapper[37036]: I0312 14:57:45.002629 37036 scope.go:117] "RemoveContainer" containerID="4bbe3b4a0e9688f41597323db7c8d29bbb53026d0fbd65feee38b96c8e042453" Mar 12 14:58:26.964348 master-0 kubenswrapper[37036]: I0312 14:58:26.964266 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8r8cw/must-gather-hz5xm"] Mar 12 14:58:26.966432 master-0 kubenswrapper[37036]: I0312 14:58:26.966395 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:26.972358 master-0 kubenswrapper[37036]: I0312 14:58:26.972306 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8r8cw"/"openshift-service-ca.crt" Mar 12 14:58:26.972635 master-0 kubenswrapper[37036]: I0312 14:58:26.972380 37036 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8r8cw"/"kube-root-ca.crt" Mar 12 14:58:26.984716 master-0 kubenswrapper[37036]: I0312 14:58:26.984660 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8r8cw/must-gather-98hj2"] Mar 12 14:58:26.987007 master-0 kubenswrapper[37036]: I0312 14:58:26.986942 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.008512 master-0 kubenswrapper[37036]: I0312 14:58:27.008451 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/must-gather-hz5xm"] Mar 12 14:58:27.030836 master-0 kubenswrapper[37036]: I0312 14:58:27.030771 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/must-gather-98hj2"] Mar 12 14:58:27.067468 master-0 kubenswrapper[37036]: I0312 14:58:27.062731 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-must-gather-output\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.067468 master-0 kubenswrapper[37036]: I0312 14:58:27.062814 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d7696fca-9f2c-4106-af46-037eda3305f3-must-gather-output\") pod \"must-gather-hz5xm\" (UID: 
\"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.067468 master-0 kubenswrapper[37036]: I0312 14:58:27.062920 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mdr\" (UniqueName: \"kubernetes.io/projected/d7696fca-9f2c-4106-af46-037eda3305f3-kube-api-access-l2mdr\") pod \"must-gather-hz5xm\" (UID: \"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.067468 master-0 kubenswrapper[37036]: I0312 14:58:27.063047 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp99l\" (UniqueName: \"kubernetes.io/projected/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-kube-api-access-gp99l\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.165511 master-0 kubenswrapper[37036]: I0312 14:58:27.165425 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp99l\" (UniqueName: \"kubernetes.io/projected/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-kube-api-access-gp99l\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.165767 master-0 kubenswrapper[37036]: I0312 14:58:27.165596 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-must-gather-output\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.165767 master-0 kubenswrapper[37036]: I0312 14:58:27.165619 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/d7696fca-9f2c-4106-af46-037eda3305f3-must-gather-output\") pod \"must-gather-hz5xm\" (UID: \"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.165767 master-0 kubenswrapper[37036]: I0312 14:58:27.165672 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2mdr\" (UniqueName: \"kubernetes.io/projected/d7696fca-9f2c-4106-af46-037eda3305f3-kube-api-access-l2mdr\") pod \"must-gather-hz5xm\" (UID: \"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.187292 master-0 kubenswrapper[37036]: I0312 14:58:27.187226 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-must-gather-output\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.199722 master-0 kubenswrapper[37036]: I0312 14:58:27.199640 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d7696fca-9f2c-4106-af46-037eda3305f3-must-gather-output\") pod \"must-gather-hz5xm\" (UID: \"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.414970 master-0 kubenswrapper[37036]: I0312 14:58:27.414400 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp99l\" (UniqueName: \"kubernetes.io/projected/af12ad53-66f1-4e51-8bfc-8d6c9dd2234d-kube-api-access-gp99l\") pod \"must-gather-98hj2\" (UID: \"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d\") " pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:27.419658 master-0 kubenswrapper[37036]: I0312 14:58:27.419589 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l2mdr\" (UniqueName: \"kubernetes.io/projected/d7696fca-9f2c-4106-af46-037eda3305f3-kube-api-access-l2mdr\") pod \"must-gather-hz5xm\" (UID: \"d7696fca-9f2c-4106-af46-037eda3305f3\") " pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.589058 master-0 kubenswrapper[37036]: I0312 14:58:27.588996 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" Mar 12 14:58:27.611315 master-0 kubenswrapper[37036]: I0312 14:58:27.611246 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r8cw/must-gather-98hj2" Mar 12 14:58:28.146423 master-0 kubenswrapper[37036]: I0312 14:58:28.146360 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/must-gather-98hj2"] Mar 12 14:58:28.149825 master-0 kubenswrapper[37036]: I0312 14:58:28.149752 37036 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:58:28.265463 master-0 kubenswrapper[37036]: I0312 14:58:28.265418 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/must-gather-hz5xm"] Mar 12 14:58:28.268001 master-0 kubenswrapper[37036]: W0312 14:58:28.266016 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7696fca_9f2c_4106_af46_037eda3305f3.slice/crio-4a7c1131a5620d8c2f87eff4f6cf9f9c7da9630994a370118dbd52c9866b80f0 WatchSource:0}: Error finding container 4a7c1131a5620d8c2f87eff4f6cf9f9c7da9630994a370118dbd52c9866b80f0: Status 404 returned error can't find the container with id 4a7c1131a5620d8c2f87eff4f6cf9f9c7da9630994a370118dbd52c9866b80f0 Mar 12 14:58:28.612829 master-0 kubenswrapper[37036]: I0312 14:58:28.612680 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" 
event={"ID":"d7696fca-9f2c-4106-af46-037eda3305f3","Type":"ContainerStarted","Data":"4a7c1131a5620d8c2f87eff4f6cf9f9c7da9630994a370118dbd52c9866b80f0"} Mar 12 14:58:28.614545 master-0 kubenswrapper[37036]: I0312 14:58:28.614506 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-98hj2" event={"ID":"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d","Type":"ContainerStarted","Data":"14517ce38cb1226906f8af0db1f172037150307757d79e68faeb895c75c3d6a5"} Mar 12 14:58:30.642967 master-0 kubenswrapper[37036]: I0312 14:58:30.642887 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-98hj2" event={"ID":"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d","Type":"ContainerStarted","Data":"b3a31879e21cffde2d833dc5f15614b25c9b3a10757282b60197b6f9f915c24c"} Mar 12 14:58:30.642967 master-0 kubenswrapper[37036]: I0312 14:58:30.642959 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-98hj2" event={"ID":"af12ad53-66f1-4e51-8bfc-8d6c9dd2234d","Type":"ContainerStarted","Data":"623f928db01292b7d5d91c1e86fc43b0df6b640c2fdd9989167adf221aff9c92"} Mar 12 14:58:31.724224 master-0 kubenswrapper[37036]: I0312 14:58:31.724119 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r8cw/must-gather-98hj2" podStartSLOduration=4.317323173 podStartE2EDuration="5.72409297s" podCreationTimestamp="2026-03-12 14:58:26 +0000 UTC" firstStartedPulling="2026-03-12 14:58:28.147718425 +0000 UTC m=+1367.155459362" lastFinishedPulling="2026-03-12 14:58:29.554488222 +0000 UTC m=+1368.562229159" observedRunningTime="2026-03-12 14:58:31.716057995 +0000 UTC m=+1370.723798932" watchObservedRunningTime="2026-03-12 14:58:31.72409297 +0000 UTC m=+1370.731833917" Mar 12 14:58:32.790043 master-0 kubenswrapper[37036]: I0312 14:58:32.790002 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-xxhhx_a35674af-162c-4a4a-8605-158b2326267e/cluster-version-operator/0.log" Mar 12 14:58:33.761698 master-0 kubenswrapper[37036]: I0312 14:58:33.761656 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-xxhhx_a35674af-162c-4a4a-8605-158b2326267e/cluster-version-operator/1.log" Mar 12 14:58:38.803967 master-0 kubenswrapper[37036]: I0312 14:58:38.802805 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-6d5c2_b866dae3-6a86-4fc5-af95-d12b24ad3f52/nmstate-console-plugin/0.log" Mar 12 14:58:38.849932 master-0 kubenswrapper[37036]: I0312 14:58:38.842954 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q2z2r_9098b81d-4f6c-4c5b-a8eb-8471467f295f/nmstate-handler/0.log" Mar 12 14:58:38.912114 master-0 kubenswrapper[37036]: I0312 14:58:38.911974 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-4wrzj_63f17f99-00e2-470c-9a36-a121e3bd8fb8/nmstate-metrics/0.log" Mar 12 14:58:38.929920 master-0 kubenswrapper[37036]: I0312 14:58:38.929852 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-4wrzj_63f17f99-00e2-470c-9a36-a121e3bd8fb8/kube-rbac-proxy/0.log" Mar 12 14:58:38.946236 master-0 kubenswrapper[37036]: I0312 14:58:38.946171 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-tkxjb_8f4463a6-493b-4af7-959a-eaef1ff7048f/nmstate-operator/0.log" Mar 12 14:58:38.962369 master-0 kubenswrapper[37036]: I0312 14:58:38.962328 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-9qpxv_2a484e66-3d50-4a01-968a-7758520e5880/nmstate-webhook/0.log" Mar 12 14:58:39.264239 master-0 
kubenswrapper[37036]: I0312 14:58:39.262818 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/controller/0.log" Mar 12 14:58:39.279884 master-0 kubenswrapper[37036]: I0312 14:58:39.278959 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/kube-rbac-proxy/0.log" Mar 12 14:58:39.365916 master-0 kubenswrapper[37036]: I0312 14:58:39.363453 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/controller/0.log" Mar 12 14:58:40.282030 master-0 kubenswrapper[37036]: I0312 14:58:40.281995 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/frr/0.log" Mar 12 14:58:40.296609 master-0 kubenswrapper[37036]: I0312 14:58:40.295876 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/reloader/0.log" Mar 12 14:58:40.302249 master-0 kubenswrapper[37036]: I0312 14:58:40.302215 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/frr-metrics/0.log" Mar 12 14:58:40.311058 master-0 kubenswrapper[37036]: I0312 14:58:40.311022 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/kube-rbac-proxy/0.log" Mar 12 14:58:40.320635 master-0 kubenswrapper[37036]: I0312 14:58:40.320566 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/kube-rbac-proxy-frr/0.log" Mar 12 14:58:40.329654 master-0 kubenswrapper[37036]: I0312 14:58:40.328115 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-frr-files/0.log" Mar 12 14:58:40.338767 master-0 kubenswrapper[37036]: I0312 14:58:40.338720 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-reloader/0.log" Mar 12 14:58:40.347630 master-0 kubenswrapper[37036]: I0312 14:58:40.347568 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-metrics/0.log" Mar 12 14:58:40.358880 master-0 kubenswrapper[37036]: I0312 14:58:40.358847 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nt2jr_9b53f7ff-ae45-4b88-9a32-8548fcab110a/frr-k8s-webhook-server/0.log" Mar 12 14:58:40.378715 master-0 kubenswrapper[37036]: I0312 14:58:40.378662 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-794566cf8d-rcz9c_5160bc8b-ff23-474f-b5b9-fa90f8e78394/manager/0.log" Mar 12 14:58:40.400067 master-0 kubenswrapper[37036]: I0312 14:58:40.399980 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-644b57d759-m5szb_9ae40425-b1c6-4fe9-bf12-7af305cc7990/webhook-server/0.log" Mar 12 14:58:40.816205 master-0 kubenswrapper[37036]: I0312 14:58:40.816163 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l6mt6_a17fe07a-69eb-4d18-9348-1ea5bddf51a6/speaker/0.log" Mar 12 14:58:40.827607 master-0 kubenswrapper[37036]: I0312 14:58:40.827550 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l6mt6_a17fe07a-69eb-4d18-9348-1ea5bddf51a6/kube-rbac-proxy/0.log" Mar 12 14:58:40.942932 master-0 kubenswrapper[37036]: I0312 14:58:40.941991 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log" Mar 12 14:58:41.262999 master-0 kubenswrapper[37036]: I0312 14:58:41.262104 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log" Mar 12 14:58:41.281919 master-0 kubenswrapper[37036]: I0312 14:58:41.281171 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log" Mar 12 14:58:41.294119 master-0 kubenswrapper[37036]: I0312 14:58:41.294058 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log" Mar 12 14:58:41.312235 master-0 kubenswrapper[37036]: I0312 14:58:41.312177 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log" Mar 12 14:58:41.330236 master-0 kubenswrapper[37036]: I0312 14:58:41.330197 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log" Mar 12 14:58:41.377940 master-0 kubenswrapper[37036]: I0312 14:58:41.374967 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log" Mar 12 14:58:41.392942 master-0 kubenswrapper[37036]: I0312 14:58:41.392837 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log" Mar 12 14:58:41.438002 master-0 kubenswrapper[37036]: I0312 14:58:41.437954 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_23b56974-d2b1-4205-af5a-70cc2b616d1a/installer/0.log" Mar 12 14:58:41.487200 master-0 kubenswrapper[37036]: I0312 14:58:41.487118 37036 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd_installer-2-master-0_b2d8e6e9-c10f-4b43-8155-9addbfddba2e/installer/0.log" Mar 12 14:58:43.332125 master-0 kubenswrapper[37036]: I0312 14:58:43.332012 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-lbcvf_146495bf-0787-483f-a9fc-0e8925b89150/assisted-installer-controller/0.log" Mar 12 14:58:43.955566 master-0 kubenswrapper[37036]: I0312 14:58:43.955511 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" event={"ID":"d7696fca-9f2c-4106-af46-037eda3305f3","Type":"ContainerStarted","Data":"b500c47c790f8af4a11252ffbe37aaffeb2f983fcb5bc945cfa26d3f6acb48c5"} Mar 12 14:58:43.955942 master-0 kubenswrapper[37036]: I0312 14:58:43.955889 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" event={"ID":"d7696fca-9f2c-4106-af46-037eda3305f3","Type":"ContainerStarted","Data":"4ba7e4472cc0e1e82c85dc47d3ecf98dba9efa4383b1f0ee3b4bafe2b341b8e3"} Mar 12 14:58:43.973516 master-0 kubenswrapper[37036]: I0312 14:58:43.965432 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-859898ff78-qv7v9_1172bb7b-c430-4011-b869-0f6ba03987d5/oauth-openshift/0.log" Mar 12 14:58:43.989256 master-0 kubenswrapper[37036]: I0312 14:58:43.987046 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r8cw/must-gather-hz5xm" podStartSLOduration=3.75225208 podStartE2EDuration="17.987018363s" podCreationTimestamp="2026-03-12 14:58:26 +0000 UTC" firstStartedPulling="2026-03-12 14:58:28.268665415 +0000 UTC m=+1367.276406352" lastFinishedPulling="2026-03-12 14:58:42.503431698 +0000 UTC m=+1381.511172635" observedRunningTime="2026-03-12 14:58:43.978366034 +0000 UTC m=+1382.986106971" watchObservedRunningTime="2026-03-12 14:58:43.987018363 +0000 UTC m=+1382.994759300" Mar 12 14:58:45.431173 master-0 
kubenswrapper[37036]: I0312 14:58:45.431125 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/4.log" Mar 12 14:58:45.484919 master-0 kubenswrapper[37036]: I0312 14:58:45.484818 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-jpf47_57930a54-89ab-4ec8-a504-74035bb74d63/authentication-operator/5.log" Mar 12 14:58:46.598918 master-0 kubenswrapper[37036]: I0312 14:58:46.597895 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-gjwhp_e7f6ebd3-98c8-457c-a88c-7e81270f01b5/router/6.log" Mar 12 14:58:46.603481 master-0 kubenswrapper[37036]: I0312 14:58:46.603166 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-gjwhp_e7f6ebd3-98c8-457c-a88c-7e81270f01b5/router/5.log" Mar 12 14:58:46.792178 master-0 kubenswrapper[37036]: E0312 14:58:46.791958 37036 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:37078->192.168.32.10:40455: write tcp 192.168.32.10:37078->192.168.32.10:40455: write: broken pipe Mar 12 14:58:47.278920 master-0 kubenswrapper[37036]: I0312 14:58:47.278511 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n"] Mar 12 14:58:47.292918 master-0 kubenswrapper[37036]: I0312 14:58:47.284001 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n"] Mar 12 14:58:47.292918 master-0 kubenswrapper[37036]: I0312 14:58:47.284254 37036 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.364446 master-0 kubenswrapper[37036]: I0312 14:58:47.364310 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-sys\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.364622 master-0 kubenswrapper[37036]: I0312 14:58:47.364611 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-proc\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.364960 master-0 kubenswrapper[37036]: I0312 14:58:47.364927 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-podres\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.370914 master-0 kubenswrapper[37036]: I0312 14:58:47.368047 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9zm\" (UniqueName: \"kubernetes.io/projected/a8b2d752-6399-4198-a288-a5094e06cd54-kube-api-access-9w9zm\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.370914 master-0 kubenswrapper[37036]: I0312 14:58:47.368282 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-lib-modules\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.470923 master-0 kubenswrapper[37036]: I0312 14:58:47.470830 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-lib-modules\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471150 master-0 kubenswrapper[37036]: I0312 14:58:47.470959 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-sys\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471150 master-0 kubenswrapper[37036]: I0312 14:58:47.471027 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-proc\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471150 master-0 kubenswrapper[37036]: I0312 14:58:47.471060 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-podres\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471150 master-0 kubenswrapper[37036]: I0312 
14:58:47.471093 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w9zm\" (UniqueName: \"kubernetes.io/projected/a8b2d752-6399-4198-a288-a5094e06cd54-kube-api-access-9w9zm\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471635 master-0 kubenswrapper[37036]: I0312 14:58:47.471591 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-lib-modules\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471737 master-0 kubenswrapper[37036]: I0312 14:58:47.471711 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-proc\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471786 master-0 kubenswrapper[37036]: I0312 14:58:47.471680 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-podres\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.471834 master-0 kubenswrapper[37036]: I0312 14:58:47.471611 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a8b2d752-6399-4198-a288-a5094e06cd54-sys\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " 
pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.490756 master-0 kubenswrapper[37036]: I0312 14:58:47.490539 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w9zm\" (UniqueName: \"kubernetes.io/projected/a8b2d752-6399-4198-a288-a5094e06cd54-kube-api-access-9w9zm\") pod \"perf-node-gather-daemonset-wt88n\" (UID: \"a8b2d752-6399-4198-a288-a5094e06cd54\") " pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.636123 master-0 kubenswrapper[37036]: I0312 14:58:47.635934 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" Mar 12 14:58:47.899682 master-0 kubenswrapper[37036]: I0312 14:58:47.899560 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/oauth-apiserver/0.log" Mar 12 14:58:47.962094 master-0 kubenswrapper[37036]: I0312 14:58:47.962020 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/oauth-apiserver/1.log" Mar 12 14:58:48.009622 master-0 kubenswrapper[37036]: I0312 14:58:48.009363 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-794bf69795-vntlz_7420564a-dc9d-4a2e-b0fc-0cc01f115e3b/fix-audit-permissions/0.log" Mar 12 14:58:48.295024 master-0 kubenswrapper[37036]: I0312 14:58:48.294978 37036 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n"] Mar 12 14:58:48.392008 master-0 kubenswrapper[37036]: I0312 14:58:48.391878 37036 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8r8cw/master-0-debug-4bz5b"] Mar 12 14:58:48.393659 master-0 kubenswrapper[37036]: I0312 14:58:48.393615 37036 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.515801 master-0 kubenswrapper[37036]: I0312 14:58:48.515738 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90cec6a5-07a9-4694-b816-12b704634290-host\") pod \"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.516081 master-0 kubenswrapper[37036]: I0312 14:58:48.515811 37036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsfq\" (UniqueName: \"kubernetes.io/projected/90cec6a5-07a9-4694-b816-12b704634290-kube-api-access-tlsfq\") pod \"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.618725 master-0 kubenswrapper[37036]: I0312 14:58:48.618681 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90cec6a5-07a9-4694-b816-12b704634290-host\") pod \"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.618725 master-0 kubenswrapper[37036]: I0312 14:58:48.618732 37036 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlsfq\" (UniqueName: \"kubernetes.io/projected/90cec6a5-07a9-4694-b816-12b704634290-kube-api-access-tlsfq\") pod \"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.619015 master-0 kubenswrapper[37036]: I0312 14:58:48.618872 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90cec6a5-07a9-4694-b816-12b704634290-host\") pod 
\"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.635472 master-0 kubenswrapper[37036]: I0312 14:58:48.635414 37036 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlsfq\" (UniqueName: \"kubernetes.io/projected/90cec6a5-07a9-4694-b816-12b704634290-kube-api-access-tlsfq\") pod \"master-0-debug-4bz5b\" (UID: \"90cec6a5-07a9-4694-b816-12b704634290\") " pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.758152 master-0 kubenswrapper[37036]: I0312 14:58:48.758056 37036 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" Mar 12 14:58:48.804639 master-0 kubenswrapper[37036]: W0312 14:58:48.804554 37036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90cec6a5_07a9_4694_b816_12b704634290.slice/crio-99afaf2e3f36ec01cd049ba3032cbffd314a13c50e9bdd6be251f5c715cc3420 WatchSource:0}: Error finding container 99afaf2e3f36ec01cd049ba3032cbffd314a13c50e9bdd6be251f5c715cc3420: Status 404 returned error can't find the container with id 99afaf2e3f36ec01cd049ba3032cbffd314a13c50e9bdd6be251f5c715cc3420 Mar 12 14:58:49.033105 master-0 kubenswrapper[37036]: I0312 14:58:49.033042 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" event={"ID":"90cec6a5-07a9-4694-b816-12b704634290","Type":"ContainerStarted","Data":"99afaf2e3f36ec01cd049ba3032cbffd314a13c50e9bdd6be251f5c715cc3420"} Mar 12 14:58:49.034988 master-0 kubenswrapper[37036]: I0312 14:58:49.034959 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" event={"ID":"a8b2d752-6399-4198-a288-a5094e06cd54","Type":"ContainerStarted","Data":"91ccbaf34c4dc717ac7c4b430a483a5d02baa45e8f8d42fb94f7b8533723561b"} Mar 
12 14:58:49.034988 master-0 kubenswrapper[37036]: I0312 14:58:49.034984 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" event={"ID":"a8b2d752-6399-4198-a288-a5094e06cd54","Type":"ContainerStarted","Data":"b163eb3ee08f2e352cecf39c2bf243d09f1dbfea134864dd13f2885d7e424e95"}
Mar 12 14:58:49.037481 master-0 kubenswrapper[37036]: I0312 14:58:49.037450 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n"
Mar 12 14:58:49.092126 master-0 kubenswrapper[37036]: I0312 14:58:49.091932 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/kube-rbac-proxy/0.log"
Mar 12 14:58:49.115882 master-0 kubenswrapper[37036]: I0312 14:58:49.115797 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/cluster-autoscaler-operator/0.log"
Mar 12 14:58:49.123083 master-0 kubenswrapper[37036]: I0312 14:58:49.122862 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-b7296_9757edbb-8ce2-4513-9b32-a552df50634c/cluster-autoscaler-operator/1.log"
Mar 12 14:58:49.140583 master-0 kubenswrapper[37036]: I0312 14:58:49.140530 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/3.log"
Mar 12 14:58:49.141775 master-0 kubenswrapper[37036]: I0312 14:58:49.141652 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/cluster-baremetal-operator/4.log"
Mar 12 14:58:49.167407 master-0 kubenswrapper[37036]: I0312 14:58:49.167250 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hs6mc_3edaa533-ecbb-443e-a270-4cb4f923daf6/baremetal-kube-rbac-proxy/0.log"
Mar 12 14:58:49.189113 master-0 kubenswrapper[37036]: I0312 14:58:49.189031 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-7s8fj_f3c13c5f-3d1f-4e0a-b77b-732255680086/control-plane-machine-set-operator/0.log"
Mar 12 14:58:49.189656 master-0 kubenswrapper[37036]: I0312 14:58:49.189263 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-7s8fj_f3c13c5f-3d1f-4e0a-b77b-732255680086/control-plane-machine-set-operator/1.log"
Mar 12 14:58:49.214316 master-0 kubenswrapper[37036]: I0312 14:58:49.211102 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/kube-rbac-proxy/0.log"
Mar 12 14:58:49.243938 master-0 kubenswrapper[37036]: I0312 14:58:49.234633 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/machine-api-operator/0.log"
Mar 12 14:58:49.243938 master-0 kubenswrapper[37036]: I0312 14:58:49.236847 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-qtx2d_6f5cd3ff-ced6-47e3-8054-d83053d87680/machine-api-operator/1.log"
Mar 12 14:58:50.324459 master-0 kubenswrapper[37036]: I0312 14:58:50.324363 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-api-0_3f760056-b25d-4261-9ad5-66ba2dc8e046/cinder-05598-api-log/0.log"
Mar 12 14:58:50.343475 master-0 kubenswrapper[37036]: I0312 14:58:50.343428 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-api-0_3f760056-b25d-4261-9ad5-66ba2dc8e046/cinder-api/0.log"
Mar 12 14:58:50.449006 master-0 kubenswrapper[37036]: I0312 14:58:50.447078 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-backup-0_59d7a356-c194-4ab2-9291-c1116ecc4bde/cinder-backup/0.log"
Mar 12 14:58:50.457236 master-0 kubenswrapper[37036]: I0312 14:58:50.457194 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-backup-0_59d7a356-c194-4ab2-9291-c1116ecc4bde/probe/0.log"
Mar 12 14:58:50.471590 master-0 kubenswrapper[37036]: I0312 14:58:50.471495 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-db-sync-6bdmp_75876022-f077-4c9e-95c1-3d0b1dbb61a3/cinder-05598-db-sync/0.log"
Mar 12 14:58:50.560878 master-0 kubenswrapper[37036]: I0312 14:58:50.560240 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-scheduler-0_76d48ea9-e7d6-4acd-b340-957c34aceb04/cinder-scheduler/0.log"
Mar 12 14:58:50.849480 master-0 kubenswrapper[37036]: I0312 14:58:50.849429 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-scheduler-0_76d48ea9-e7d6-4acd-b340-957c34aceb04/probe/0.log"
Mar 12 14:58:50.969407 master-0 kubenswrapper[37036]: I0312 14:58:50.969320 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-volume-lvm-iscsi-0_d700a6d3-cb4f-4971-8b80-30eaab119193/cinder-volume/0.log"
Mar 12 14:58:50.981438 master-0 kubenswrapper[37036]: I0312 14:58:50.981376 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-05598-volume-lvm-iscsi-0_d700a6d3-cb4f-4971-8b80-30eaab119193/probe/0.log"
Mar 12 14:58:50.994585 master-0 kubenswrapper[37036]: I0312 14:58:50.994532 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-0f2a-account-create-update-7cc89_4d80a8ef-edb7-4620-a67f-dcdcdd80a907/mariadb-account-create-update/0.log"
Mar 12 14:58:51.005046 master-0 kubenswrapper[37036]: I0312 14:58:51.005006 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-d5cg7_690417a0-ecec-4a79-ab5b-789407fec2b0/mariadb-database-create/0.log"
Mar 12 14:58:51.023258 master-0 kubenswrapper[37036]: I0312 14:58:51.020644 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-676dcc7665-z72s6_30518ff6-a619-4340-9982-2662a8475370/dnsmasq-dns/0.log"
Mar 12 14:58:51.045169 master-0 kubenswrapper[37036]: I0312 14:58:51.044155 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-676dcc7665-z72s6_30518ff6-a619-4340-9982-2662a8475370/init/0.log"
Mar 12 14:58:51.066371 master-0 kubenswrapper[37036]: I0312 14:58:51.066293 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-9848-account-create-update-p47pv_280b3449-e519-4936-b541-9ea239fe7aee/mariadb-account-create-update/0.log"
Mar 12 14:58:51.161149 master-0 kubenswrapper[37036]: I0312 14:58:51.157815 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-bc20e-default-external-api-0_d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c/glance-log/0.log"
Mar 12 14:58:51.169963 master-0 kubenswrapper[37036]: I0312 14:58:51.168067 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-bc20e-default-external-api-0_d6f44e4f-ee7c-47ac-a347-0f91e81dfb2c/glance-httpd/0.log"
Mar 12 14:58:51.262048 master-0 kubenswrapper[37036]: I0312 14:58:51.262004 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-bc20e-default-internal-api-0_23884bb7-c60a-40ec-b96e-7b5280cea5f5/glance-log/0.log"
Mar 12 14:58:51.275325 master-0 kubenswrapper[37036]: I0312 14:58:51.275275 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5_1047bb4a-135f-488d-9399-0518cb3a827d/cluster-cloud-controller-manager/0.log"
Mar 12 14:58:51.276833 master-0 kubenswrapper[37036]: I0312 14:58:51.276406 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-bc20e-default-internal-api-0_23884bb7-c60a-40ec-b96e-7b5280cea5f5/glance-httpd/0.log"
Mar 12 14:58:51.289224 master-0 kubenswrapper[37036]: I0312 14:58:51.287501 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-kqst6_3dccc99c-4958-49fe-8db1-1658241ccd0c/mariadb-database-create/0.log"
Mar 12 14:58:51.303299 master-0 kubenswrapper[37036]: I0312 14:58:51.303257 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5_1047bb4a-135f-488d-9399-0518cb3a827d/config-sync-controllers/0.log"
Mar 12 14:58:51.307841 master-0 kubenswrapper[37036]: I0312 14:58:51.307803 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-989wd_78a7388f-90a4-420a-b2ef-e31fb1fda25e/glance-db-sync/0.log"
Mar 12 14:58:51.317138 master-0 kubenswrapper[37036]: I0312 14:58:51.317089 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-2v4z5_1047bb4a-135f-488d-9399-0518cb3a827d/kube-rbac-proxy/0.log"
Mar 12 14:58:51.323727 master-0 kubenswrapper[37036]: I0312 14:58:51.323684 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5df675497f-chr8j_09d8068a-62a2-4363-9735-46c62d79015e/ironic-api-log/0.log"
Mar 12 14:58:51.347915 master-0 kubenswrapper[37036]: I0312 14:58:51.347870 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5df675497f-chr8j_09d8068a-62a2-4363-9735-46c62d79015e/ironic-api/0.log"
Mar 12 14:58:51.360184 master-0 kubenswrapper[37036]: I0312 14:58:51.360161 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5df675497f-chr8j_09d8068a-62a2-4363-9735-46c62d79015e/init/0.log"
Mar 12 14:58:51.376567 master-0 kubenswrapper[37036]: I0312 14:58:51.376529 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-a807-account-create-update-qt766_65ac62ad-c04b-410b-bdc7-44e1663f6682/mariadb-account-create-update/0.log"
Mar 12 14:58:51.408658 master-0 kubenswrapper[37036]: I0312 14:58:51.408615 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/ironic-conductor/0.log"
Mar 12 14:58:51.418338 master-0 kubenswrapper[37036]: I0312 14:58:51.418246 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/httpboot/0.log"
Mar 12 14:58:51.426379 master-0 kubenswrapper[37036]: I0312 14:58:51.426329 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/dnsmasq/0.log"
Mar 12 14:58:51.435064 master-0 kubenswrapper[37036]: I0312 14:58:51.435017 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/init/0.log"
Mar 12 14:58:51.441999 master-0 kubenswrapper[37036]: I0312 14:58:51.441962 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/ironic-python-agent-init/0.log"
Mar 12 14:58:52.110312 master-0 kubenswrapper[37036]: I0312 14:58:52.110256 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_8c0524b9-cbf3-40e3-9424-98b634ba1b10/pxe-init/0.log"
Mar 12 14:58:52.122163 master-0 kubenswrapper[37036]: I0312 14:58:52.121710 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-create-5ghhs_93af238f-2462-4584-872e-6e7c2c98b599/mariadb-database-create/0.log"
Mar 12 14:58:52.143996 master-0 kubenswrapper[37036]: I0312 14:58:52.142477 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-hzl7q_67c2c80d-8881-4a05-8d2f-2350b3848b13/ironic-db-sync/0.log"
Mar 12 14:58:52.155709 master-0 kubenswrapper[37036]: I0312 14:58:52.153690 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-hzl7q_67c2c80d-8881-4a05-8d2f-2350b3848b13/init/0.log"
Mar 12 14:58:52.183166 master-0 kubenswrapper[37036]: I0312 14:58:52.183114 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/ironic-inspector-httpd/0.log"
Mar 12 14:58:52.202298 master-0 kubenswrapper[37036]: I0312 14:58:52.202235 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/ironic-inspector/0.log"
Mar 12 14:58:52.211699 master-0 kubenswrapper[37036]: I0312 14:58:52.211658 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/inspector-httpboot/0.log"
Mar 12 14:58:52.226452 master-0 kubenswrapper[37036]: I0312 14:58:52.221541 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/ramdisk-logs/0.log"
Mar 12 14:58:52.236024 master-0 kubenswrapper[37036]: I0312 14:58:52.235698 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/inspector-dnsmasq/0.log"
Mar 12 14:58:52.245595 master-0 kubenswrapper[37036]: I0312 14:58:52.245557 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/ironic-python-agent-init/0.log"
Mar 12 14:58:52.276150 master-0 kubenswrapper[37036]: I0312 14:58:52.276015 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_8deaf53d-c497-4a42-92f7-1d88df637fec/inspector-pxe-init/0.log"
Mar 12 14:58:52.300648 master-0 kubenswrapper[37036]: I0312 14:58:52.300596 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-a166-account-create-update-4lb7x_97d7251f-7c8b-4119-af0a-368d13352fc2/mariadb-account-create-update/0.log"
Mar 12 14:58:52.319131 master-0 kubenswrapper[37036]: I0312 14:58:52.319081 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-create-w9frr_a271d093-100f-4c56-a201-16eb10358184/mariadb-database-create/0.log"
Mar 12 14:58:52.329932 master-0 kubenswrapper[37036]: I0312 14:58:52.329857 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-ht555_d8654d3e-dee8-4a56-9b0b-3dbeb0d2a463/ironic-inspector-db-sync/0.log"
Mar 12 14:58:52.349613 master-0 kubenswrapper[37036]: I0312 14:58:52.349555 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-5685659465-xhxkv_ee3a29d2-bf14-4521-896e-b0169adefcb2/ironic-neutron-agent/2.log"
Mar 12 14:58:52.351315 master-0 kubenswrapper[37036]: I0312 14:58:52.351282 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-5685659465-xhxkv_ee3a29d2-bf14-4521-896e-b0169adefcb2/ironic-neutron-agent/1.log"
Mar 12 14:58:52.361593 master-0 kubenswrapper[37036]: I0312 14:58:52.361461 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-1d9f-account-create-update-kkfzt_68421a5c-f523-46fc-8448-704811e6ed1c/mariadb-account-create-update/0.log"
Mar 12 14:58:52.413494 master-0 kubenswrapper[37036]: I0312 14:58:52.413376 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-679985d476-m2lh7_04b5377d-4b4e-4bdf-80f6-63cc26f5cdb1/keystone-api/0.log"
Mar 12 14:58:52.431229 master-0 kubenswrapper[37036]: I0312 14:58:52.431159 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-l6zns_b41a1efb-8f79-468f-a3e1-2d42cba4456a/keystone-bootstrap/0.log"
Mar 12 14:58:52.446146 master-0 kubenswrapper[37036]: I0312 14:58:52.442953 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-cqt24_a9644543-4e13-4b3c-9862-7a861ea2af30/mariadb-database-create/0.log"
Mar 12 14:58:52.458866 master-0 kubenswrapper[37036]: I0312 14:58:52.458811 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-wmxbp_4272c013-816c-4779-a81d-2945610612f3/keystone-db-sync/0.log"
Mar 12 14:58:53.918249 master-0 kubenswrapper[37036]: I0312 14:58:53.918196 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-pxgq9_de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/kube-rbac-proxy/0.log"
Mar 12 14:58:53.954543 master-0 kubenswrapper[37036]: I0312 14:58:53.954469 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-pxgq9_de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/cloud-credential-operator/0.log"
Mar 12 14:58:53.955959 master-0 kubenswrapper[37036]: I0312 14:58:53.955930 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-pxgq9_de61e1fe-294c-48a6-8cf3-aeb4637ef2cc/cloud-credential-operator/1.log"
Mar 12 14:58:56.582093 master-0 kubenswrapper[37036]: I0312 14:58:56.582041 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/4.log"
Mar 12 14:58:56.589786 master-0 kubenswrapper[37036]: I0312 14:58:56.589749 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-config-operator/5.log"
Mar 12 14:58:56.610664 master-0 kubenswrapper[37036]: I0312 14:58:56.610607 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-ljnjj_0a898118-6d01-4211-92f0-43967b75405c/openshift-api/0.log"
Mar 12 14:58:57.673993 master-0 kubenswrapper[37036]: I0312 14:58:57.673841 37036 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n"
Mar 12 14:58:57.811617 master-0 kubenswrapper[37036]: I0312 14:58:57.811518 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r8cw/perf-node-gather-daemonset-wt88n" podStartSLOduration=10.81149623 podStartE2EDuration="10.81149623s" podCreationTimestamp="2026-03-12 14:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:58:49.060183412 +0000 UTC m=+1388.067924369" watchObservedRunningTime="2026-03-12 14:58:57.81149623 +0000 UTC m=+1396.819237167"
Mar 12 14:58:59.317802 master-0 kubenswrapper[37036]: I0312 14:58:59.317740 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-4kp7s_c2f3fb87-655d-4622-b0c3-4288a9bb76d2/console-operator/0.log"
Mar 12 14:59:00.599919 master-0 kubenswrapper[37036]: I0312 14:59:00.594486 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f48d8466d-4rmwh_10012bf3-1e8d-4224-9d71-51d4a0231a08/console/0.log"
Mar 12 14:59:00.635660 master-0 kubenswrapper[37036]: I0312 14:59:00.634475 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-84f57b9877-ftnvc_70c26392-cfee-4dc3-9c71-4684558daa07/download-server/0.log"
Mar 12 14:59:01.062929 master-0 kubenswrapper[37036]: I0312 14:59:01.061614 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b6a9660b-6127-48b0-82e7-cf5e38a66429/memcached/0.log"
Mar 12 14:59:01.213666 master-0 kubenswrapper[37036]: I0312 14:59:01.213609 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65b67d6cc7-hwxhs_62ff79c8-5b73-4f55-a1d5-6288146d42f7/neutron-api/0.log"
Mar 12 14:59:01.227459 master-0 kubenswrapper[37036]: I0312 14:59:01.225527 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65b67d6cc7-hwxhs_62ff79c8-5b73-4f55-a1d5-6288146d42f7/neutron-httpd/0.log"
Mar 12 14:59:01.262244 master-0 kubenswrapper[37036]: I0312 14:59:01.254548 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8992-account-create-update-6z47s_800b0f0d-e0c1-458c-92b5-2773be83e138/mariadb-account-create-update/0.log"
Mar 12 14:59:01.278434 master-0 kubenswrapper[37036]: I0312 14:59:01.278185 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-gmb6j_7ad87258-d41a-4e83-9215-354562cf0075/mariadb-database-create/0.log"
Mar 12 14:59:01.464284 master-0 kubenswrapper[37036]: I0312 14:59:01.464166 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-79jsx_fa5c40b0-d90b-4a98-af67-d37503c2c2dc/neutron-db-sync/0.log"
Mar 12 14:59:01.582918 master-0 kubenswrapper[37036]: I0312 14:59:01.582446 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_371ab32b-22e0-41bb-8d36-5634d9ea3722/nova-api-log/0.log"
Mar 12 14:59:01.820311 master-0 kubenswrapper[37036]: I0312 14:59:01.819852 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_371ab32b-22e0-41bb-8d36-5634d9ea3722/nova-api-api/0.log"
Mar 12 14:59:01.854100 master-0 kubenswrapper[37036]: I0312 14:59:01.854050 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-a000-account-create-update-t5sxm_eb4a72f6-6d97-4d7b-a538-11604e6144ea/mariadb-account-create-update/0.log"
Mar 12 14:59:01.876467 master-0 kubenswrapper[37036]: I0312 14:59:01.876381 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-fqxgc_e762e1b3-ab0c-47b4-88a1-4e4030b12ed4/mariadb-database-create/0.log"
Mar 12 14:59:01.931335 master-0 kubenswrapper[37036]: I0312 14:59:01.931287 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-994d-account-create-update-h6mj5_112425ab-cbf2-468c-b40c-e64aa339389c/mariadb-account-create-update/0.log"
Mar 12 14:59:01.951023 master-0 kubenswrapper[37036]: I0312 14:59:01.950971 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-hvjms_ccb059aa-827d-46f3-8218-8178e9eeafbd/nova-manage/0.log"
Mar 12 14:59:02.056226 master-0 kubenswrapper[37036]: I0312 14:59:02.056176 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b210b938-2578-45b2-a6ef-84908f58242a/nova-cell0-conductor-conductor/0.log"
Mar 12 14:59:02.084000 master-0 kubenswrapper[37036]: I0312 14:59:02.083185 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-zjqjr_61a413f4-9b2a-4a44-aef7-6c75090b9a44/nova-cell0-conductor-db-sync/0.log"
Mar 12 14:59:02.097008 master-0 kubenswrapper[37036]: I0312 14:59:02.093010 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-9dzk7_72fa304a-a97e-4350-81f5-6180bc4ba594/mariadb-database-create/0.log"
Mar 12 14:59:02.121966 master-0 kubenswrapper[37036]: I0312 14:59:02.115209 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-26qtm_40de96d4-7f02-4bc3-a660-957d6b986159/nova-manage/0.log"
Mar 12 14:59:02.194610 master-0 kubenswrapper[37036]: I0312 14:59:02.194550 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_2a440e7b-a37d-4e7e-9873-ac70bc709a60/nova-cell1-compute-ironic-compute-compute/0.log"
Mar 12 14:59:02.300799 master-0 kubenswrapper[37036]: I0312 14:59:02.300434 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_29a0e0e6-b7b6-4632-94d9-86c24da56df4/nova-cell1-conductor-conductor/0.log"
Mar 12 14:59:02.318188 master-0 kubenswrapper[37036]: I0312 14:59:02.318148 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/1.log"
Mar 12 14:59:02.326712 master-0 kubenswrapper[37036]: I0312 14:59:02.325836 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-q72km_46e4d8ed-5640-49bc-ae47-44c113072fab/nova-cell1-conductor-db-sync/0.log"
Mar 12 14:59:02.332007 master-0 kubenswrapper[37036]: I0312 14:59:02.331960 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-xgrsw_06eb9f4b-167e-435b-8ef6-ae44fc0b85a9/cluster-storage-operator/2.log"
Mar 12 14:59:02.340633 master-0 kubenswrapper[37036]: I0312 14:59:02.340169 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-d364-account-create-update-dznnx_14afc361-6f21-415e-af7b-7ed3a4f9c48b/mariadb-account-create-update/0.log"
Mar 12 14:59:02.346926 master-0 kubenswrapper[37036]: I0312 14:59:02.346887 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-zktq7_f92f3efc-76bc-40a0-b3c3-8da77d03c022/mariadb-database-create/0.log"
Mar 12 14:59:02.354748 master-0 kubenswrapper[37036]: I0312 14:59:02.354674 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/5.log"
Mar 12 14:59:02.357054 master-0 kubenswrapper[37036]: I0312 14:59:02.355173 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-z9hzg_d56089bf-177c-492d-8964-73a45574e7ed/snapshot-controller/6.log"
Mar 12 14:59:02.359295 master-0 kubenswrapper[37036]: I0312 14:59:02.359216 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-host-discover-q6h7h_ccc5c739-be50-4e7f-a490-f901f062e630/nova-manage/0.log"
Mar 12 14:59:02.383047 master-0 kubenswrapper[37036]: I0312 14:59:02.381765 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/2.log"
Mar 12 14:59:02.392499 master-0 kubenswrapper[37036]: I0312 14:59:02.392464 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-ckmlv_8660cef9-0ab3-453e-a4b9-c243daa6ddb0/csi-snapshot-controller-operator/3.log"
Mar 12 14:59:02.433396 master-0 kubenswrapper[37036]: I0312 14:59:02.433358 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_aa4193cf-d004-497f-b8da-736467c10ced/nova-cell1-novncproxy-novncproxy/0.log"
Mar 12 14:59:02.522639 master-0 kubenswrapper[37036]: I0312 14:59:02.521516 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_67eba7bf-7232-4b81-b3cc-2f3f34737ba6/nova-metadata-log/0.log"
Mar 12 14:59:02.725228 master-0 kubenswrapper[37036]: I0312 14:59:02.725111 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_67eba7bf-7232-4b81-b3cc-2f3f34737ba6/nova-metadata-metadata/0.log"
Mar 12 14:59:02.865071 master-0 kubenswrapper[37036]: I0312 14:59:02.865011 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_96a60e40-b6c7-4771-9eb6-54aa05628a4d/nova-scheduler-scheduler/0.log"
Mar 12 14:59:03.215920 master-0 kubenswrapper[37036]: I0312 14:59:03.215856 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e303709a-0166-4153-9e20-0351599d1a9c/galera/0.log"
Mar 12 14:59:03.241795 master-0 kubenswrapper[37036]: I0312 14:59:03.241734 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e303709a-0166-4153-9e20-0351599d1a9c/mysql-bootstrap/0.log"
Mar 12 14:59:03.474814 master-0 kubenswrapper[37036]: I0312 14:59:03.474633 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_114161cb-b5bb-41d9-b085-63a181ec3480/galera/0.log"
Mar 12 14:59:03.523236 master-0 kubenswrapper[37036]: I0312 14:59:03.522752 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_114161cb-b5bb-41d9-b085-63a181ec3480/mysql-bootstrap/0.log"
Mar 12 14:59:03.537444 master-0 kubenswrapper[37036]: I0312 14:59:03.537397 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c3083373-d0bb-4775-b8ea-1d34f46bc0b7/openstackclient/0.log"
Mar 12 14:59:03.555291 master-0 kubenswrapper[37036]: I0312 14:59:03.555138 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4mq52_a24e803b-32b7-4b4b-bb59-f58b9a506626/ovn-controller/0.log"
Mar 12 14:59:03.575724 master-0 kubenswrapper[37036]: I0312 14:59:03.575616 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rgttc_e864e00e-7629-4dab-ae9a-55f609712148/openstack-network-exporter/0.log"
Mar 12 14:59:03.596564 master-0 kubenswrapper[37036]: I0312 14:59:03.596451 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6l42m_30f28fa9-b72d-471a-b089-9a79f5669fae/ovsdb-server/0.log"
Mar 12 14:59:03.608201 master-0 kubenswrapper[37036]: I0312 14:59:03.607971 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6l42m_30f28fa9-b72d-471a-b089-9a79f5669fae/ovs-vswitchd/0.log"
Mar 12 14:59:03.622797 master-0 kubenswrapper[37036]: I0312 14:59:03.622734 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6l42m_30f28fa9-b72d-471a-b089-9a79f5669fae/ovsdb-server-init/0.log"
Mar 12 14:59:03.636159 master-0 kubenswrapper[37036]: I0312 14:59:03.634947 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22529544-4de1-4a4d-8b41-71e9c3b522e1/ovn-northd/0.log"
Mar 12 14:59:03.650247 master-0 kubenswrapper[37036]: I0312 14:59:03.650153 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22529544-4de1-4a4d-8b41-71e9c3b522e1/openstack-network-exporter/0.log"
Mar 12 14:59:03.668555 master-0 kubenswrapper[37036]: I0312 14:59:03.668417 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_46cbbfbf-551d-40f6-ab13-5a988d23c1d4/ovsdbserver-nb/0.log"
Mar 12 14:59:03.681794 master-0 kubenswrapper[37036]: I0312 14:59:03.681179 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_46cbbfbf-551d-40f6-ab13-5a988d23c1d4/openstack-network-exporter/0.log"
Mar 12 14:59:03.702954 master-0 kubenswrapper[37036]: I0312 14:59:03.702553 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bf3d5632-6ab4-4408-a837-7897110106d4/ovsdbserver-sb/0.log"
Mar 12 14:59:03.731363 master-0 kubenswrapper[37036]: I0312 14:59:03.730668 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_bf3d5632-6ab4-4408-a837-7897110106d4/openstack-network-exporter/0.log"
Mar 12 14:59:03.749628 master-0 kubenswrapper[37036]: I0312 14:59:03.747428 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-4fdf-account-create-update-bvvpt_9e21e790-37ba-458a-a7a6-c17ed7736b11/mariadb-account-create-update/0.log"
Mar 12 14:59:03.797022 master-0 kubenswrapper[37036]: I0312 14:59:03.796893 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-867bb94d6d-fmw6x_c163c0a9-d63a-491b-a3c7-4a97bede9f2f/placement-log/0.log"
Mar 12 14:59:03.819439 master-0 kubenswrapper[37036]: I0312 14:59:03.819044 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-867bb94d6d-fmw6x_c163c0a9-d63a-491b-a3c7-4a97bede9f2f/placement-api/0.log"
Mar 12 14:59:03.833613 master-0 kubenswrapper[37036]: I0312 14:59:03.831344 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-cjlkf_6defdb3a-1932-4d90-b25c-af496585b703/mariadb-database-create/0.log"
Mar 12 14:59:03.843259 master-0 kubenswrapper[37036]: I0312 14:59:03.843203 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-rdgvk_36a55e95-783b-40ef-996a-5e29f87dc118/placement-db-sync/0.log"
Mar 12 14:59:03.889354 master-0 kubenswrapper[37036]: I0312 14:59:03.889288 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f063fb36-4428-461a-8b29-3750c3f8217f/rabbitmq/0.log"
Mar 12 14:59:03.896771 master-0 kubenswrapper[37036]: I0312 14:59:03.894312 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f063fb36-4428-461a-8b29-3750c3f8217f/setup-container/0.log"
Mar 12 14:59:03.940011 master-0 kubenswrapper[37036]: I0312 14:59:03.939962 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e/rabbitmq/0.log"
Mar 12 14:59:03.947374 master-0 kubenswrapper[37036]: I0312 14:59:03.947314 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_78cbfbac-b4dc-46bc-8804-a4c8b52f5f4e/setup-container/0.log"
Mar 12 14:59:03.961924 master-0 kubenswrapper[37036]: I0312 14:59:03.960702 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_root-account-create-update-sxdng_661deaeb-75cd-4a4f-b211-91dcffe41b1b/mariadb-account-create-update/0.log"
Mar 12 14:59:04.004582 master-0 kubenswrapper[37036]: I0312 14:59:04.004493 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-q4wwv_8c6b9f13-4a3a-4920-a84b-f76516501f81/dns-operator/0.log"
Mar 12 14:59:04.021752 master-0 kubenswrapper[37036]: I0312 14:59:04.019124 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-q4wwv_8c6b9f13-4a3a-4920-a84b-f76516501f81/kube-rbac-proxy/0.log"
Mar 12 14:59:04.033921 master-0 kubenswrapper[37036]: I0312 14:59:04.030256 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75dfc444b6-mtcqr_649018e4-7368-455c-8b92-fae29b1b01ec/proxy-httpd/0.log"
Mar 12 14:59:04.045394 master-0 kubenswrapper[37036]: I0312 14:59:04.044332 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75dfc444b6-mtcqr_649018e4-7368-455c-8b92-fae29b1b01ec/proxy-server/0.log"
Mar 12 14:59:04.061262 master-0 kubenswrapper[37036]: I0312 14:59:04.061157 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qpqr7_a35ad3f9-8c4a-47cb-8326-a552e0b1dad1/swift-ring-rebalance/0.log"
Mar 12 14:59:04.088955 master-0 kubenswrapper[37036]: I0312 14:59:04.088866 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/account-server/0.log"
Mar 12 14:59:04.106040 master-0 kubenswrapper[37036]: I0312 14:59:04.106000 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/account-replicator/0.log"
Mar 12 14:59:04.111538 master-0 kubenswrapper[37036]: I0312 14:59:04.111051 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/account-auditor/0.log"
Mar 12 14:59:04.119958 master-0 kubenswrapper[37036]: I0312 14:59:04.119910 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/account-reaper/0.log"
Mar 12 14:59:04.127821 master-0 kubenswrapper[37036]: I0312 14:59:04.127752 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/container-server/0.log"
Mar 12 14:59:04.139864 master-0 kubenswrapper[37036]: I0312 14:59:04.139772 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/container-replicator/0.log"
Mar 12 14:59:04.145125 master-0 kubenswrapper[37036]: I0312 14:59:04.145092 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/container-auditor/0.log"
Mar 12 14:59:04.155737 master-0 kubenswrapper[37036]: I0312 14:59:04.155691 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/container-updater/0.log"
Mar 12 14:59:04.165599 master-0 kubenswrapper[37036]: I0312 14:59:04.165405 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/object-server/0.log"
Mar 12 14:59:04.176404 master-0 kubenswrapper[37036]: I0312 14:59:04.176343 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/object-replicator/0.log"
Mar 12 14:59:04.188533 master-0 kubenswrapper[37036]: I0312 14:59:04.188473 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/object-auditor/0.log"
Mar 12 14:59:04.194407 master-0 kubenswrapper[37036]: I0312 14:59:04.194361 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/object-updater/0.log"
Mar 12 14:59:04.210078 master-0 kubenswrapper[37036]: I0312 14:59:04.210018 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/object-expirer/0.log"
Mar 12 14:59:04.222331 master-0 kubenswrapper[37036]: I0312 14:59:04.222282 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/rsync/0.log"
Mar 12 14:59:04.229057 master-0 kubenswrapper[37036]: I0312 14:59:04.229018 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_02b0bb9f-56cd-4ffe-9e37-2200e4baec09/swift-recon-cron/0.log"
Mar 12 14:59:05.449258 master-0 kubenswrapper[37036]: I0312 14:59:05.449169 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-fpjck_3ec846db-e344-4f9e-95e6-7a0055f52766/dns/0.log"
Mar 12 14:59:05.469020 master-0 kubenswrapper[37036]: I0312 14:59:05.468939 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-fpjck_3ec846db-e344-4f9e-95e6-7a0055f52766/kube-rbac-proxy/0.log"
Mar 12 14:59:05.489327 master-0 kubenswrapper[37036]: I0312 14:59:05.489210 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-nml4k_3815db41-fe01-43f6-b75c-4ccca9124f51/dns-node-resolver/0.log"
Mar 12 14:59:06.383724 master-0 kubenswrapper[37036]: I0312 14:59:06.383663 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/3.log"
Mar 12 14:59:06.398397 master-0 kubenswrapper[37036]: I0312 14:59:06.398347 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-mjxsv_8d775283-2696-4411-8ddf-d4e6000f0a0c/etcd-operator/4.log"
Mar 12 14:59:07.346255 master-0 kubenswrapper[37036]: I0312 14:59:07.346113 37036 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" event={"ID":"90cec6a5-07a9-4694-b816-12b704634290","Type":"ContainerStarted","Data":"40c0902a619c73d64792eb18388e551db1090cc7e6ea2c05a7835de59e36d667"}
Mar 12 14:59:07.383042 master-0 kubenswrapper[37036]: I0312 14:59:07.381208 37036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8r8cw/master-0-debug-4bz5b" podStartSLOduration=1.660756431 podStartE2EDuration="19.381189615s" podCreationTimestamp="2026-03-12 14:58:48 +0000 UTC" firstStartedPulling="2026-03-12 14:58:48.808009376 +0000 UTC m=+1387.815750313" lastFinishedPulling="2026-03-12 14:59:06.52844256 +0000 UTC m=+1405.536183497" observedRunningTime="2026-03-12 14:59:07.368060596 +0000 UTC m=+1406.375801543" watchObservedRunningTime="2026-03-12 14:59:07.381189615 +0000 UTC m=+1406.388930552"
Mar 12 14:59:07.685267 master-0 kubenswrapper[37036]: I0312 14:59:07.685130 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 12 14:59:07.918717 master-0 kubenswrapper[37036]: I0312 14:59:07.918008 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 12 14:59:07.936679 master-0 kubenswrapper[37036]: I0312 14:59:07.936290 37036 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log" Mar 12 14:59:07.949541 master-0 kubenswrapper[37036]: I0312 14:59:07.949479 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log" Mar 12 14:59:07.969254 master-0 kubenswrapper[37036]: I0312 14:59:07.969194 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log" Mar 12 14:59:07.990077 master-0 kubenswrapper[37036]: I0312 14:59:07.990025 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log" Mar 12 14:59:08.010740 master-0 kubenswrapper[37036]: I0312 14:59:08.010688 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log" Mar 12 14:59:08.028774 master-0 kubenswrapper[37036]: I0312 14:59:08.028701 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log" Mar 12 14:59:08.082177 master-0 kubenswrapper[37036]: I0312 14:59:08.082116 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_23b56974-d2b1-4205-af5a-70cc2b616d1a/installer/0.log" Mar 12 14:59:08.129727 master-0 kubenswrapper[37036]: I0312 14:59:08.129672 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_b2d8e6e9-c10f-4b43-8155-9addbfddba2e/installer/0.log" Mar 12 14:59:09.196381 master-0 kubenswrapper[37036]: I0312 14:59:09.195935 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-54cr9_a2435b91-86d6-415b-a978-34cc859e74f2/cluster-image-registry-operator/0.log" Mar 12 14:59:09.205120 
master-0 kubenswrapper[37036]: I0312 14:59:09.205079 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-54cr9_a2435b91-86d6-415b-a978-34cc859e74f2/cluster-image-registry-operator/1.log" Mar 12 14:59:09.225413 master-0 kubenswrapper[37036]: I0312 14:59:09.225187 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-nn5f6_4e5aaf2a-7df5-464b-b7c1-5a0e696eff02/node-ca/0.log" Mar 12 14:59:10.261682 master-0 kubenswrapper[37036]: I0312 14:59:10.261625 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/5.log" Mar 12 14:59:10.262836 master-0 kubenswrapper[37036]: I0312 14:59:10.262761 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/ingress-operator/6.log" Mar 12 14:59:10.274574 master-0 kubenswrapper[37036]: I0312 14:59:10.274536 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-44hhf_4bbd4f6c-53c0-45dc-ac7c-940a3a5a08f6/kube-rbac-proxy/0.log" Mar 12 14:59:11.053765 master-0 kubenswrapper[37036]: I0312 14:59:11.053729 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-dbdr9_ef5679f7-5bf5-409d-b74b-64a9cbb6c701/serve-healthcheck-canary/0.log" Mar 12 14:59:11.811410 master-0 kubenswrapper[37036]: I0312 14:59:11.811364 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-gltz7_dd29b21c-7a0e-4311-952f-427b00468e66/insights-operator/3.log" Mar 12 14:59:11.834911 master-0 kubenswrapper[37036]: I0312 14:59:11.834851 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-gltz7_dd29b21c-7a0e-4311-952f-427b00468e66/insights-operator/4.log" Mar 12 14:59:14.516131 master-0 kubenswrapper[37036]: I0312 14:59:14.516074 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/alertmanager/0.log" Mar 12 14:59:14.557160 master-0 kubenswrapper[37036]: I0312 14:59:14.553739 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/config-reloader/0.log" Mar 12 14:59:14.598284 master-0 kubenswrapper[37036]: I0312 14:59:14.598234 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/kube-rbac-proxy-web/0.log" Mar 12 14:59:14.631928 master-0 kubenswrapper[37036]: I0312 14:59:14.631850 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/kube-rbac-proxy/0.log" Mar 12 14:59:14.650450 master-0 kubenswrapper[37036]: I0312 14:59:14.650315 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/kube-rbac-proxy-metric/0.log" Mar 12 14:59:14.661584 master-0 kubenswrapper[37036]: I0312 14:59:14.661465 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/prom-label-proxy/0.log" Mar 12 14:59:14.685229 master-0 kubenswrapper[37036]: I0312 14:59:14.684160 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_b31526da-bd77-4d32-af73-1ceccaebdce7/init-config-reloader/0.log" Mar 12 14:59:14.733921 master-0 kubenswrapper[37036]: I0312 14:59:14.732836 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-6w5nv_42dbcb8f-e8c4-413e-977d-40aa6df226aa/cluster-monitoring-operator/0.log" Mar 12 14:59:14.760019 master-0 kubenswrapper[37036]: I0312 14:59:14.759971 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vfvts_a81be38f-e07e-4863-8d61-fdefc2713a6a/kube-state-metrics/0.log" Mar 12 14:59:14.781463 master-0 kubenswrapper[37036]: I0312 14:59:14.781312 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vfvts_a81be38f-e07e-4863-8d61-fdefc2713a6a/kube-rbac-proxy-main/0.log" Mar 12 14:59:14.805967 master-0 kubenswrapper[37036]: I0312 14:59:14.805878 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vfvts_a81be38f-e07e-4863-8d61-fdefc2713a6a/kube-rbac-proxy-self/0.log" Mar 12 14:59:14.825938 master-0 kubenswrapper[37036]: I0312 14:59:14.825149 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-654cbcb7cd-n2kbl_685f22f3-dec5-476b-98a0-0cb73da77a3f/metrics-server/0.log" Mar 12 14:59:14.843919 master-0 kubenswrapper[37036]: I0312 14:59:14.843839 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-64948d9545-xshsb_a725ec48-e77d-4fce-957a-67abe8712193/monitoring-plugin/0.log" Mar 12 14:59:14.868321 master-0 kubenswrapper[37036]: I0312 14:59:14.868268 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5pkwh_b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/node-exporter/0.log" Mar 12 14:59:14.890490 master-0 kubenswrapper[37036]: I0312 14:59:14.890440 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5pkwh_b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/kube-rbac-proxy/0.log" Mar 12 14:59:14.908449 master-0 kubenswrapper[37036]: I0312 
14:59:14.908391 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-5pkwh_b90e26a5-b42f-4fd5-a79b-6f4836a4bfc7/init-textfile/0.log" Mar 12 14:59:14.924236 master-0 kubenswrapper[37036]: I0312 14:59:14.924173 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-jms82_59f21770-429b-4b63-82fd-50ce0daf698d/kube-rbac-proxy-main/0.log" Mar 12 14:59:14.940961 master-0 kubenswrapper[37036]: I0312 14:59:14.940812 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-jms82_59f21770-429b-4b63-82fd-50ce0daf698d/kube-rbac-proxy-self/0.log" Mar 12 14:59:14.966289 master-0 kubenswrapper[37036]: I0312 14:59:14.966239 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-jms82_59f21770-429b-4b63-82fd-50ce0daf698d/openshift-state-metrics/0.log" Mar 12 14:59:14.997731 master-0 kubenswrapper[37036]: I0312 14:59:14.997671 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/prometheus/0.log" Mar 12 14:59:15.063634 master-0 kubenswrapper[37036]: I0312 14:59:15.063520 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/config-reloader/0.log" Mar 12 14:59:15.079425 master-0 kubenswrapper[37036]: I0312 14:59:15.079365 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/thanos-sidecar/0.log" Mar 12 14:59:15.103182 master-0 kubenswrapper[37036]: I0312 14:59:15.101519 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/kube-rbac-proxy-web/0.log" Mar 12 14:59:15.143580 master-0 kubenswrapper[37036]: 
I0312 14:59:15.143535 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/kube-rbac-proxy/0.log" Mar 12 14:59:15.171098 master-0 kubenswrapper[37036]: I0312 14:59:15.170989 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/kube-rbac-proxy-thanos/0.log" Mar 12 14:59:15.187371 master-0 kubenswrapper[37036]: I0312 14:59:15.187315 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6876993e-91d1-4544-bd72-e2eb4f1e10d1/init-config-reloader/0.log" Mar 12 14:59:15.217666 master-0 kubenswrapper[37036]: I0312 14:59:15.217587 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-bwl7h_4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/prometheus-operator/0.log" Mar 12 14:59:15.255384 master-0 kubenswrapper[37036]: I0312 14:59:15.255323 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-bwl7h_4bbcab11-187f-4b6b-bfe1-d0ba8ad651ba/kube-rbac-proxy/0.log" Mar 12 14:59:15.283158 master-0 kubenswrapper[37036]: I0312 14:59:15.283082 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-8464df8497-b5qg4_900b2a0e-1e2b-41a3-86f5-639ec1e95969/prometheus-operator-admission-webhook/0.log" Mar 12 14:59:15.300691 master-0 kubenswrapper[37036]: I0312 14:59:15.300642 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/controller/0.log" Mar 12 14:59:15.306370 master-0 kubenswrapper[37036]: I0312 14:59:15.305682 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/kube-rbac-proxy/0.log" Mar 12 
14:59:15.310637 master-0 kubenswrapper[37036]: I0312 14:59:15.310555 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-xq7vd_f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/telemeter-client/0.log" Mar 12 14:59:15.324348 master-0 kubenswrapper[37036]: I0312 14:59:15.324235 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-xq7vd_f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/reload/0.log" Mar 12 14:59:15.327857 master-0 kubenswrapper[37036]: I0312 14:59:15.327819 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/controller/0.log" Mar 12 14:59:15.364421 master-0 kubenswrapper[37036]: I0312 14:59:15.364323 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cbb5fd9f8-xq7vd_f9dfe48c-daa1-4c18-9cf5-7b4930a0e649/kube-rbac-proxy/0.log" Mar 12 14:59:15.386420 master-0 kubenswrapper[37036]: I0312 14:59:15.386378 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/thanos-query/0.log" Mar 12 14:59:15.398148 master-0 kubenswrapper[37036]: I0312 14:59:15.398103 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/kube-rbac-proxy-web/0.log" Mar 12 14:59:15.445520 master-0 kubenswrapper[37036]: I0312 14:59:15.445019 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/kube-rbac-proxy/0.log" Mar 12 14:59:15.467047 master-0 kubenswrapper[37036]: I0312 14:59:15.466990 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/prom-label-proxy/0.log" Mar 12 
14:59:15.493971 master-0 kubenswrapper[37036]: I0312 14:59:15.493665 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/kube-rbac-proxy-rules/0.log" Mar 12 14:59:15.560697 master-0 kubenswrapper[37036]: I0312 14:59:15.560642 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-79b7956d9f-qkhd4_71a335bd-078a-4b8c-ae09-2e40765034d3/kube-rbac-proxy-metrics/0.log" Mar 12 14:59:16.358587 master-0 kubenswrapper[37036]: I0312 14:59:16.358545 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/frr/0.log" Mar 12 14:59:16.402483 master-0 kubenswrapper[37036]: I0312 14:59:16.402426 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/reloader/0.log" Mar 12 14:59:16.410298 master-0 kubenswrapper[37036]: I0312 14:59:16.410224 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/frr-metrics/0.log" Mar 12 14:59:16.415534 master-0 kubenswrapper[37036]: I0312 14:59:16.415446 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/kube-rbac-proxy/0.log" Mar 12 14:59:16.424764 master-0 kubenswrapper[37036]: I0312 14:59:16.424731 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/kube-rbac-proxy-frr/0.log" Mar 12 14:59:16.433887 master-0 kubenswrapper[37036]: I0312 14:59:16.433574 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-frr-files/0.log" Mar 12 14:59:16.445987 master-0 kubenswrapper[37036]: I0312 14:59:16.445218 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-reloader/0.log" Mar 12 14:59:16.454571 master-0 kubenswrapper[37036]: I0312 14:59:16.454511 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/cp-metrics/0.log" Mar 12 14:59:16.468066 master-0 kubenswrapper[37036]: I0312 14:59:16.468016 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-nt2jr_9b53f7ff-ae45-4b88-9a32-8548fcab110a/frr-k8s-webhook-server/0.log" Mar 12 14:59:16.501571 master-0 kubenswrapper[37036]: I0312 14:59:16.501279 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-794566cf8d-rcz9c_5160bc8b-ff23-474f-b5b9-fa90f8e78394/manager/0.log" Mar 12 14:59:16.517961 master-0 kubenswrapper[37036]: I0312 14:59:16.517556 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-644b57d759-m5szb_9ae40425-b1c6-4fe9-bf12-7af305cc7990/webhook-server/0.log" Mar 12 14:59:17.113553 master-0 kubenswrapper[37036]: I0312 14:59:17.113502 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l6mt6_a17fe07a-69eb-4d18-9348-1ea5bddf51a6/speaker/0.log" Mar 12 14:59:21.255345 master-0 kubenswrapper[37036]: I0312 14:59:21.254862 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-l6mt6_a17fe07a-69eb-4d18-9348-1ea5bddf51a6/kube-rbac-proxy/0.log" Mar 12 14:59:22.534346 master-0 kubenswrapper[37036]: I0312 14:59:22.534259 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/controller/0.log" Mar 12 14:59:22.740934 master-0 kubenswrapper[37036]: I0312 14:59:22.739354 37036 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-9rrn5_27123016-8e66-428d-8998-0b9113e606a7/kube-rbac-proxy/0.log" Mar 12 14:59:23.809432 master-0 kubenswrapper[37036]: I0312 14:59:23.809385 37036 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6r4g_3e9c0d11-1aaf-4303-b4ea-9f6da7ca589d/controller/0.log"